It’s been over two decades since Intel released a discrete graphics card (the last was the disappointing Intel740, which hit the market back in 1998). That is set to change in 2020, when Intel intends to release its new discrete GPU, the Intel Xe.
Intel is being characteristically tight-lipped about what we can expect from its new cards, but if the Xe (pronounced “ex-ee”) proves to be a reliable alternative to Nvidia or AMD cards, PC builders might soon have more options at their disposal.
This article will tell you what we know about Intel Xe so far.
Pricing and Release Date
Speculation on pricing for the Xe ran high back in August, when Intel chief architect and senior vice president of architecture Raja Koduri appeared in an interview on the Russian YouTube channel Pro Hi-Tech. According to a translation on Reddit, Koduri reportedly said that the Xe units would be “GPUs for everyone at [a] $200 price, then the same architecture but with the higher amount of HBM memory for data centers.”
The video has since been taken down, and Intel later clarified that Koduri was speaking more broadly about GPU pricing: $200 is generally a mass-market, entry-level price for modern cards. Intel also said that its “strategy revolves around going for the full stack that ranges from Client to the Data Center,” according to Digital Trends.
What seems clear is that Intel is planning a full range of discrete GPUs, from budget cards in the $200 range to enthusiast cards to data-center monsters.
Intel also hasn’t announced an exact release date beyond sometime in 2020. However, when Koduri tweeted a picture of the vanity license plate on his Tesla, which read “ThinkXE,” many noted the tag’s June 2020 expiration date as a possible hint at the Xe release date.
Architecture and Performance
A driver leak this summer pointed to at least four upcoming discrete graphics cards. Some clever sleuthing determined that there might be configurations with 128, 256, and 512 “execution units” (EUs) per card. Execution units are Intel’s rough equivalent of Nvidia’s CUDA cores, and they relate directly to the speed and power of the card: the more you have, the better the card.
As a reference point, Intel’s 10th-generation Ice Lake processors max out at 64 EUs, so the new Xe cards could be significantly more powerful than Intel’s current line of chips with integrated graphics. These numbers also align with the rumored pricing of around $200—safely within the budget and midrange pricing brackets.
Recent improvements in iGPU architecture also hint at what Intel might be working on with Xe. The Gen 11 iGPU featured on Ice Lake processors computes at a whopping 1.12 TFLOPS, putting it on par with GeForce MX, Nvidia’s line of entry-level laptop GPUs. It also adds adaptive sync to minimize motion blur and screen tearing, and it supports HDR for better contrast and a wider range of colors. The new iGPU is a massive improvement over previous Intel integrated graphics, which could be very good news for budget and laptop gamers.
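That 1.12 TFLOPS figure can be roughly reproduced with back-of-the-envelope math. A hedged sketch in Python, assuming (these specifics are not from the article) that each Gen 11 EU has 8 FP32 lanes, each lane does a fused multiply-add (2 FLOPs) per cycle, and the GPU boosts to about 1.1 GHz:

```python
# Rough theoretical peak FP32 throughput for an Intel iGPU.
# Assumed (not stated in the article): 8 FP32 lanes per EU and
# FMA counted as 2 FLOPs per lane per cycle.

def peak_tflops(eus, clock_ghz, lanes_per_eu=8, flops_per_lane=2):
    """Return theoretical peak single-precision TFLOPS."""
    return eus * lanes_per_eu * flops_per_lane * clock_ghz / 1000.0

# Gen 11 Ice Lake iGPU: 64 EUs at a ~1.1 GHz boost clock.
print(f"Gen 11, 64 EUs: {peak_tflops(64, 1.1):.2f} TFLOPS")  # ≈ 1.13

# Hypothetical Xe configurations from the driver leak, same clock:
for eus in (128, 256, 512):
    print(f"Xe, {eus} EUs: {peak_tflops(eus, 1.1):.2f} TFLOPS")
```

Under those assumptions, the leaked 128-, 256-, and 512-EU configurations would scale to roughly 2x, 4x, and 8x the Gen 11 iGPU’s theoretical throughput, clock speeds and memory bandwidth permitting.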
With Nvidia’s full-throated support of real-time ray tracing and AMD’s announced plans to include the feature, it seems ray tracing is all but assured for future GPUs. Intel has already announced ray tracing support for the Xe architecture… sort of. In a statement, Jim Jeffers, Intel’s senior principal engineer, said, “Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of APIs and libraries.”
Importantly, that statement doesn’t say anything about ray tracing in games, but since the architecture will have ray tracing implementation in its enterprise-class hardware, most analysts presume Intel will also include it in consumer cards.
While we don’t have the full details of Intel’s Xe GPUs, we do have a few hints as to what we can expect in terms of performance, price, and features. Intel’s putting a lot of effort into this new lineup of discrete GPUs. If they can pull off a reliable competitor to AMD and Nvidia cards, PC builders will be the true victors, finally having more than two options in the GPU marketplace.