They used ray tracing for the hit registration, so that’s presumably why.
It’s a really interesting idea … presumably that means some really flashy guns and a very intricate damage system that runs at least partially on the GPU.
W10 was OK. Slow, but OK. W11 is so much jank and buggy bullshit. I moved all my games to Linux. With Proton and Vulkan all my games work, including the RTX settings.
Short opinion: no, CPUs can do that fine (possibly better) and it’s a tiny corner of game logic.
Long opinion: Intersecting projectile paths with geometry gains no advantage from being moved from the CPU to the GPU unless you’re dealing with a ridiculous number of projectiles every single frame. In most games this is less than 1% of CPU time, and moving it to the GPU will probably reduce overall performance due to the latency costs (…but a lot of modern engines already have awful frame latency, so it might fit right in).
You would only do this if you have been told by higher-ups that you have to, OR if you have a really unusual and new game design (thousands of new projectile paths every frame, i.e. hundreds of thousands of bullets per second). Even detailed multi-layer enemy models with vital components are just a few extra traces; using a GPU to calculate that would make the job harder for the engine dev for no gain.
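For scale, here’s roughly what a single hitscan check boils down to on the CPU: one standard Möller-Trumbore ray/triangle test, a handful of multiplies and adds per candidate triangle. This is a generic sketch, not anything from id’s engine, and a real engine would run it against a BVH rather than one lone hard-coded triangle:

```
#include <cstdio>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Standard Moller-Trumbore ray/triangle intersection. Returns true plus the
// hit distance t and barycentrics (u, v) if the ray hits the triangle.
static bool ray_triangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2,
                         float *t, float *u, float *v)
{
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray is parallel to the triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    *u = dot(s, p) * inv;
    if (*u < 0.0f || *u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    *v = dot(dir, q) * inv;
    if (*v < 0.0f || *u + *v > 1.0f) return false;
    *t = dot(e2, q) * inv;
    return *t > eps;                          // hit must be in front of the ray
}

int main()
{
    // One bullet ray against one triangle sitting 5 units down +Z.
    Vec3 v0 = {-1, 0, 5}, v1 = {1, 0, 5}, v2 = {0, 2, 5};
    float t, u, v;
    bool hit = ray_triangle({0, 0.5f, 0}, {0, 0, 1}, v0, v1, v2, &t, &u, &v);
    printf("hit=%d t=%.2f u=%.2f v=%.2f\n", (int)hit, t, u, v);
    return 0;
}
```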
Fun answer: check out CNlohr’s noeuclid. Sadly there’s no Windows build (I tried cross compiling but ended up in dependency hell), but it still compiles and runs under Linux. Physics are on the GPU and world geometry is very non-traditional. https://github.com/cnlohr/noeuclid
Honestly, I’m not interested in debating its validity, especially with the exact details of what they’ve done still under wraps … I have no idea if they’re really on to something or not and the details are scarce, but I did find the article I read:
https://www.pcguide.com/news/doom-the-dark-ages-promises-accurate-hit-detection-with-help-from-cutting-edge-ray-tracing-implementation/
Ooh, thank you for the link.
“We can leverage it [ray tracing] for things we haven’t been able to do in the past, which is giving accurate hit detection”
“So when you fire your weapon, the [hit] detection would be able to tell if you’re hitting a pixel that is leather sitting next to a pixel that is metal”
“Before ray tracing, we couldn’t distinguish between two pixels very easily, and we would pick one or the other because the materials were too complex. Ray tracing can do this on a per-pixel basis and showcase if you’re hitting metal or even something that’s fur. It makes the game more immersive, and you get that direct feedback as the player.”
It sounds like they’re assigning materials based on the pixels of a texture map, rather than each mesh in a model being a different material, i.e. you paint materials onto a character rather than selecting chunks of the character and assigning them.
I suspect this either won’t be noticeable at all to players or it will be a very minor improvement (at best). It’s not something worth going for in exchange for losing compatibility with other GPUs. It will require a different work pipeline for the 3D modellers (they have to paint materials on now rather than assign them per mesh), but that’s neither here nor there; it might be easier for them or it might be hell-awful depending on the tooling.
This particular sentence upsets me:
Before ray tracing, we couldn’t distinguish between two pixels very easily
Uhuh. You’re not selling me on your game company.
“Before” ray tracing, the technology that has been around for decades, and that you could have used on a CPU or GPU for this very material-sensing task for around 20 years without players noticing. Interpolate UVs across the colliding triangle and sample a texture.
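For what it’s worth, here’s a minimal sketch of that “interpolate UVs and sample a texture” step. The UVs, the barycentrics and the toy 4×4 material map are all made up for illustration; a real engine would sample its actual material mask at the hit point:

```
#include <cstdio>
#include <cstdint>

struct UV { float u, v; };

// Toy 4x4 material-ID map standing in for a real per-texel material mask.
// 0 = leather, 1 = metal, 2 = fur (made-up IDs for this sketch).
static const uint8_t kMaterialMap[4][4] = {
    {0, 0, 1, 1},
    {0, 0, 1, 1},
    {2, 2, 1, 1},
    {2, 2, 1, 1},
};
static const char *kMaterialName[] = { "leather", "metal", "fur" };

// Interpolate the per-vertex UVs at barycentric coordinates (b1, b2),
// exactly as you would at a ray/triangle hit point.
static UV interpolate_uv(UV uv0, UV uv1, UV uv2, float b1, float b2)
{
    float b0 = 1.0f - b1 - b2;
    return { b0 * uv0.u + b1 * uv1.u + b2 * uv2.u,
             b0 * uv0.v + b1 * uv1.v + b2 * uv2.v };
}

// Nearest-texel lookup into the 4x4 map.
static uint8_t sample_material(UV uv)
{
    int x = (int)(uv.u * 3.999f);
    int y = (int)(uv.v * 3.999f);
    return kMaterialMap[y][x];
}

int main()
{
    // Per-vertex UVs of the triangle that was hit, plus the barycentrics a
    // ray/triangle test (e.g. Moller-Trumbore) reports for the hit point.
    UV uv0 = {0.1f, 0.1f}, uv1 = {0.9f, 0.1f}, uv2 = {0.5f, 0.9f};
    UV hit = interpolate_uv(uv0, uv1, uv2, 0.375f, 0.25f);
    uint8_t mat = sample_material(hit);
    printf("hit uv=(%.2f, %.2f) material=%s\n", hit.u, hit.v, kMaterialName[mat]);
    return 0;
}
```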
I suspect the “more immersion” and “direct feedback” are veils over the real reasoning:
During NVIDIA’s big GeForce RTX 50 Series reveal, we learned that id has been working closely with the GeForce team on the game for several years (source)
With such a strong emphasis on RT and DLSS, it remains to be seen how these games will perform for AMD Radeon users
No-one sane implements Nvidia or AMD (or anyone else) exclusive libraries into their games unless they’re paid to do it. A game dev that cares about its players will make their game run well on all brands and flavours of graphics card.
At the end of the day this hurts consumers. If your games work on all GPU brands competitively then you have more choice and card companies are better motivated to compete. Whatever amount of money Nvidia is paying the gamedevs to do this must be smaller than what they earn back from consumers buying more of their product instead of competitors.
Well, like, basically every shooter currently uses a hitbox to do the hitscan, and that never matches the model 1:1. The hitboxes are typically far less detailed, and the weak points are just a different part of the hitbox that is similarly less detailed.
I think what they’re doing is using the RT-specialized hardware to evaluate the bullet path (just like a ray of light from a point) more cheaply than can traditionally be done on the GPU (effectively what Nvidia enabled when they introduced hardware designed for ray tracing).
If I’m guessing correctly, it’s not so much that they’re disregarding the mesh as that they’re disregarding hitbox design. Like, the hit damage is likely based on the mesh and the actual rendered model rather than the simplified hitbox … so there’s no “you technically shot past their ear, but it’s close enough so we’re going to call it a headshot” sort of stuff.
If you’re doing a simulated shotgun blast, that could also be a hundred pellets being simulated through the barrel heading towards the target. Then add in more enemies that shoot things and a few new gun designs and… maybe it starts to make sense.
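To put a rough number on that, here’s a toy sketch (all values invented) of how one shotgun blast turns into ~100 scene queries; trace_scene is just a stub standing in for whatever BVH or hardware-RT trace the engine actually performs:

```
#include <cstdio>
#include <cmath>
#include <random>

struct Vec3 { float x, y, z; };

// Stub standing in for the engine's real scene query (BVH walk, hardware RT,
// whatever). Here it just counts as "one trace" and reports no hit.
static bool trace_scene(const Vec3 &origin, const Vec3 &dir, float *t_hit)
{
    (void)origin; (void)dir;
    *t_hit = 0.0f;
    return false;
}

int main()
{
    const int   kPellets = 100;     // hypothetical pellet count per shell
    const float kSpread  = 0.05f;   // crude spread: small random x/y offset

    std::mt19937 rng(42);
    std::uniform_real_distribution<float> jitter(-kSpread, kSpread);

    Vec3 muzzle  = {0.0f, 1.7f, 0.0f};
    int  queries = 0;

    for (int i = 0; i < kPellets; ++i) {
        // Nudge the forward (+Z) direction a little per pellet and normalise.
        Vec3 dir = {jitter(rng), jitter(rng), 1.0f};
        float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
        dir = {dir.x / len, dir.y / len, dir.z / len};

        float t;
        trace_scene(muzzle, dir, &t);   // one full scene query per pellet
        ++queries;
    }
    printf("scene traces for one shotgun blast: %d\n", queries);
    return 0;
}
```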
It sounds like they’re tying the effect of attacks to the actual fine-detail game textures/materials, which I guess are only available on the GPU? It’s a weird thing to do and a bad description of it IMO, but that’s what I got from that summary. It wouldn’t be anywhere near as fast as normal hitscan would be on the CPU, and it also takes GPU time, which is generally the more limited resource given the thread counts on modern processors.
Since there is probably only 1 bullet shot on any given frame most of the time, and the minimum size of a dispatch on the GPU is usually 32-64 cores (out of maybe 1k-20k), you end up waking a whole group of them just to calculate this one singular bullet on a single core. GPU cores are also much slower than CPU cores, so the only plausible reason to do this is if the data needed literally only exists on the GPU, which it sounds like it does in this case. You would also first have to transfer to the GPU that a shot was taken, and then transfer the result back to the CPU, adding a small amount of latency both ways.
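To make that overhead concrete, here’s a hedged CUDA-style sketch of the round trip; the names (trace_one_bullet, HitResult) and the empty traversal are invented for illustration, not anything id has described. The point is that a single bullet still costs a kernel launch, a full warp of threads, and a copy back to the CPU before gameplay code can apply damage:

```
#include <cstdio>
#include <cuda_runtime.h>

struct HitResult { int hit; float t; int material_id; };

// Hypothetical kernel: the BVH traversal is elided, only the shape of the
// dispatch matters here. Even though one thread does the work, the hardware
// still schedules (at least) a full 32-thread warp for this launch.
__global__ void trace_one_bullet(float3 origin, float3 dir, HitResult *out)
{
    if (threadIdx.x == 0) {
        // ...ray/BVH traversal against the scene would go here...
        out->hit = 0;
        out->t = 0.0f;
        out->material_id = -1;
    }
}

int main()
{
    HitResult *d_out = nullptr;
    HitResult  h_out = {};
    cudaMalloc((void **)&d_out, sizeof(HitResult));

    float3 origin = make_float3(0.0f, 1.7f, 0.0f);  // muzzle position
    float3 dir    = make_float3(0.0f, 0.0f, 1.0f);  // aim direction

    // One bullet still costs a kernel launch...
    trace_one_bullet<<<1, 32>>>(origin, dir, d_out);

    // ...and a copy back to the CPU (which waits for the kernel) before any
    // gameplay code can apply damage. That round trip is the latency cost.
    cudaMemcpy(&h_out, d_out, sizeof(HitResult), cudaMemcpyDeviceToHost);

    printf("hit=%d t=%.2f material=%d\n", h_out.hit, h_out.t, h_out.material_id);
    cudaFree(d_out);
    return 0;
}
```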
This also only makes sense if you already use raytracing elsewhere, because you generally need a BVH for raytracing and these are expensive to build.
Although this is using raytracing, the only reason not to support cards without hardware raytracing is that it would take more effort to do so (as you would have to maintain both a normal raytracer and a DXR version).
Not disputing you, but hasn’t hitscan been a thing for decades? Or is what you’re saying a different thing?
Also, I always thought that the CPU and GPU either couldn’t communicate with each other, or that it was a very difficult problem to solve. Have they found a way to make this intercommunication work on a large scale? Admittedly I only scanned the article quickly, but it looks like they’re only talking about graphics quality. I’d love to know if they’re leveraging the GPU for more than just visuals!
It’s a different thing. This is pixel-perfect accuracy for the entire projectile. There aren’t hitboxes, as I understand it; it’s literally what the model is on the screen.
Ooh, that makes sense. Sounds like it could be much cheaper to process than heavy collision models. Thanks for the clarification!