So, although the graphics cards have been out for a while now (and are still a little hard to get hold of), the games with RTX support haven’t really been there. Now that some games are arriving with shiny RTX options, we’re going to take a look at performance and ask what on earth RTX is actually used for!
Seeing the World Through Ray Tracing Goggles…
From a technical standpoint, I was excited when NVIDIA announced that their new generation of GPUs would have something called RTX and that this would be the first set of graphics cards capable of real-time ray tracing in games. Ray tracing in itself isn’t a new concept if you’re a big film nerd, as it’s how realistic light has been simulated in CG animation for a while now.
Without going into too much detail, the whole point of ray tracing is allowing you to see how light sources interact realistically within an environment. As the name suggests, you follow (trace) the path of the rays of light, and you calculate how they interact with the environment. That calculation is a huge amount of math based on reflection, refraction, and absorption of light on different surfaces and materials—so it lends itself well to computer calculation.
However, if you tried to wrap your head around how many individual rays of light come from our sun and interact with the world around you, you would be there for an age just working things out for one image. So, in truth, simulations do things in reverse: rather than trying to calculate the near-infinite lighting particulars from the sun, we start at the position from which we are viewing a scene and work backwards along the path of light, discarding everything we can’t see.
For film, this still-complex calculation is done at the point where you place your virtual camera in an environment. For gaming, although this is fundamentally the same technique, the camera position and visible scene are constantly changing as the player moves around an environment.
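To make the backwards-tracing idea above concrete, here is a minimal, illustrative sketch in Python. It is not any real engine’s code: the scene (one sphere, one light), the camera setup, and all the names are assumptions made up for the example. For each pixel we fire one ray from the camera, ask what it hits, and shade the hit point by its angle to the light — exactly the “start at the viewer and work backwards” approach described above.

```python
# Minimal backwards ray tracer: one ray per pixel, fired from the camera.
# Scene and names are illustrative only, not from any real renderer.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))

def hit_sphere(origin, direction, center, radius):
    """Distance along the ray to the sphere, or None if the ray misses."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace(px, py, width, height):
    """Shade one pixel: camera at the origin, a sphere ahead, one light."""
    camera = (0.0, 0.0, 0.0)
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    light_pos = (5.0, 5.0, 0.0)
    # Map the pixel to a point on an image plane at z = -1.
    x = (px + 0.5) / width * 2.0 - 1.0
    y = 1.0 - (py + 0.5) / height * 2.0
    direction = normalize((x, y, -1.0))
    t = hit_sphere(camera, direction, sphere_center, sphere_radius)
    if t is None:
        return 0.0  # the ray escapes the scene: background, nothing to shade
    # Lambertian shading: brightness falls off with the angle to the light.
    point = tuple(camera[i] + t * direction[i] for i in range(3))
    normal = normalize(sub(point, sphere_center))
    to_light = normalize(sub(light_pos, point))
    return max(0.0, dot(normal, to_light))

# The centre pixel looks straight at the sphere; a corner pixel misses it.
center_brightness = trace(32, 32, 64, 64)
corner_brightness = trace(0, 0, 64, 64)
```

Every pixel you never look at costs nothing, which is the whole reason the reverse direction is practical. A film renderer does this with hundreds of rays per pixel plus bounces; a game has milliseconds per frame, which is where the performance questions below come in.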
Where this release gained a lot of attention is that an animated/CG film would normally use hundreds or even thousands of rays of light; with RTX, in theory, developers could employ a baby version of the technique in a game, taking into account the field of view and the resolution you are playing at.
If you’re interested in seeing a little more about this technique, I highly recommend watching this video from Disney Animation Studios.
RTX ON: Lighting Up Your World
So, while I will give some thoughts on why NVIDIA is doing this now in the next section, we need to take a look at how this is performing with the current hardware. When NVIDIA’s new cards were first announced for the mainstream with RTX, the folks over at Unreal Engine showed the potential of this technology (along with Microsoft’s DXR framework for Windows) with this video:
The thing that is important for context here, though, is that this is shot like a film rather than showing gaming performance. It’s akin to seeing a cutscene in a game, rather than you playing through the environment. This allowed them to create what is a really nice scene, which—although it does show how the ray tracing technique is used—doesn’t give any indication of gaming performance.
NVIDIA was oddly keen to not talk about frame rates either, preferring to focus on the realism of what was shown. Little was said about whether we are going to be able to play games at more than a slideshow with these techniques fully enabled.
The truth is somewhere in between.
The RTX cards do show a modest improvement in traditional gaming rendering over their 10-series cards, yet not a jump that is commensurate with the price jump. The main selling point is meant to be ray tracing, and if you’ve been keeping track of the number of times I’ve mentioned that this is a very intensive task, then you’re probably not surprised by the results:
NVIDIA’s 10-series vs RTX
So, if we consider the big boy of the 10-series, the GTX 1080 Ti: standard models are now retailing around $700-800, or, if you’re going for something a little meatier like the AORUS I have, over $900. (That is, if you can get hold of them at all, as people have been buying them up like crazy since the RTX release.) A flagship RTX 2080 Ti card like this ZOTAC Gaming model will set you back a wallet-clearing $1425. That’s not so bad, though, right? I mean, if we’re getting around a 52% increase in performance to match that price jump, then all is good…
Unfortunately for NVIDIA (well, more for us consumers), the actual frame-rate gain in standard game rendering is closer to 20-25%. That’s still quite nice, but it’s not great relative to the price increase.
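The price-versus-performance gap is easy to put in numbers. The figures below are the rough ones quoted above (a ~$900+ premium 1080 Ti versus the $1425 2080 Ti, and the observed 20-25% frame-rate gain), not fresh benchmarks:

```python
# Rough price-vs-performance check using the approximate figures from the
# article; these are illustrative, not measured benchmarks.
gtx_1080ti_price = 937   # roughly what a premium AORUS-style model sells for
rtx_2080ti_price = 1425  # the flagship ZOTAC Gaming model mentioned above

price_increase = (rtx_2080ti_price - gtx_1080ti_price) / gtx_1080ti_price
fps_increase = 0.225  # midpoint of the observed 20-25% improvement

print(f"price up {price_increase:.0%}, frames up {fps_increase:.0%}")
```

Paying roughly half again as much for roughly a fifth to a quarter more frames is the mismatch the rest of this section is complaining about.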
So with all this extra power for rendering, that’ll mean we get awesome ray tracing? Again, unfortunately not, as that technology is built around a new core type which is distinct from the CUDA core setup used in the 10-series cards. As a result, very few currently-available games get any advantage from the new tech.
There are piles of videos out there showing frame rates in games with RTX on, now that Microsoft has released their DXR framework after countless delays and NVIDIA has fixed their BSOD issues. Also, a representative from NVIDIA replied to reports of cards literally setting themselves on fire as follows:
Limited test escapes from early boards caused the issues some customers have experienced with RTX 2080 Ti Founders Edition. We stand ready to help any customers who are experiencing problems. Please visit www.nvidia.com/support to chat live with the NVIDIA tech support team (or to send us an email) and we’ll take care of it.
Although expectations were low going into this shiny new experience, I don’t think people were expecting a total of just 50-70 FPS at 1080p with the RTX 2080 Ti (whether RTX is set to ultra or low). And if you want to play at higher resolutions… don’t bother. Also, if you bought one of the lower-end RTX cards wanting a traditional gaming performance improvement over the 10-series, and you’re thinking about trying the tech… you really might want to wait until the next generation. The RTX 2080 struggles to hit even 60 FPS in low RTX mode at 1080p, and you can’t even get to 50 FPS with the RTX 2070.
Not for Gamers (but it’s Great for Filmmakers)
That’s a very doom-and-gloom heading, I know. Yet there are a few reasons for it:
Firstly, as I mentioned above, there’s a good reason why film animation can take a long time when rendering out this level of detail. For a long time now, although plenty of studios and software can handle ray tracing, there hasn’t been a good way to show this sort of thing for film in real time.
Autodesk have talked about lighting in their systems for years, starting with the basics, and now they’re going into more advanced work with the new architecture and how it allows them to really improve render performance:
Spidey has never looked better! Yet watch how the scene re-renders each time they move the camera around in the demo. It’s not quite seamless, which isn’t really surprising when you think about the detail in the suit and how those reflections and shadows interact with it.
But this, I think, is the disconnect with RTX right now: there’s clearly potential in the hardware, but it’s such a hard sell when there’s not a massive jump in traditional gaming performance, and frankly an absolute tanking of performance with RTX enabled. I doubt it’ll be mainstream for a while, simply because gamers are so used to pushing that FPS counter upward that they’re not going to take a massive performance drop for the sake of some nicer shadows and reflections.
It isn’t the end of the world, though. One of the best things about this, from a technical level, is that improvements can be made over time. When Toy Story was made back in 1995, it famously took them around 4 hours to render each frame. When it came to the fancier re-release 15 years later, those frames were taking an average of 2-4 minutes. So, a nice improvement; but keep in mind Pixar have a massive render farm at their disposal, and you have a single card. All things considered, I’m not surprised some of them are setting themselves on fire.
So, what is the takeaway here? It’s a nice ‘first go,’ but there’s certainly room for improvement. And while I really wanted to get my hands on one of these cards as soon as I had saved up the money, I (like many others) am probably going to wait a few generations before jumping into that super-realistic-looking RTX puddle.
So, what do you think? Are you thinking of buying, have you already bought one, or are you, like me, holding out for the next generation (or two)? Let me know in the comments.