Aside from FPS, is there any difference in the quality of ray tracing between Nvidia and AMD, or is it the same? (Like how they say DLSS is better than FSR.)
DLSS runs on Tensor cores, which only Nvidia GPUs have. FSR runs on anything. This means DLSS is more specialised and, if implemented properly in a game, will generally produce better results.
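To make that concrete, here's a rough sketch of how a game engine might pick an upscaler. The `Upscaler` enum and `pickUpscaler` function are made-up names for illustration, not from any real DLSS or FSR SDK:

```cpp
// Hypothetical engine-side selection; Upscaler and pickUpscaler are
// illustrative names, not part of any real DLSS/FSR SDK.
enum class Upscaler { DLSS, FSR, None };

Upscaler pickUpscaler(bool hasNvidiaTensorCores, bool upscalingEnabled) {
    if (!upscalingEnabled) return Upscaler::None;
    // DLSS requires Nvidia's Tensor cores; FSR is ordinary shader work,
    // so it runs on any vendor's GPU as a fallback.
    return hasNvidiaTensorCores ? Upscaler::DLSS : Upscaler::FSR;
}
```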
G-Sync only works on Nvidia cards and G-Sync monitors, whilst FreeSync works on FreeSync and G-Sync monitors with any GPU.
Now ray tracing runs on RT cores for Nvidia, and AMD has its equivalent (Ray Accelerators). The key difference from the former technologies is that ray tracing doesn't have an Nvidia or AMD version; the tech is part of the DirectX 12 Ultimate specification. (Vulkan has equivalent cross-vendor ray tracing extensions.) Both GPU makers support DX12, so games use the same software path to apply ray tracing on either.
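You can actually see that vendor neutrality in code: a D3D12 app asks the device what ray tracing tier it supports through the standard feature query, without ever caring who made the GPU. A minimal sketch (Windows SDK, link against d3d12.lib):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter; identical call for Nvidia/AMD/Intel.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12 device available");
        return 1;
    }

    // Ask the vendor-neutral API what ray tracing tier the hardware supports.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    HRESULT hr = device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                             &opts5, sizeof(opts5));
    if (SUCCEEDED(hr) && opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        std::puts("DXR (DirectX Raytracing) supported");
    else
        std::puts("DXR not supported on this device");
    return 0;
}
```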
The fact of the matter is that Nvidia's RT cores are more effective than the ones AMD utilises. AMD usually combats that by simply adding more of them.
In the end, it all hinges on implementation. In some games, AMD will be better because the devs have optimised for AMD GPUs. In most games, Nvidia will be better. I suggest looking up benchmarks for the games you play, with and without ray tracing.
To clarify, for the purpose of answering OP's question: the quality will be the same because it's the same code in both cases, but the performance, as in how many FPS you get, will most often differ.
Couldn't there be a difference between denoising algorithms if those are baked into the drivers?
Most likely. Bottom line, it's a total-package kinda deal. If it were just one or two components, it'd be pretty easy to improve. It's the synergy between the graphics API, the implementation in the game, the GPU, the GPU driver, the CPU, the motherboard chipset, etc., all working together.
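For anyone wondering what denoising means here: ray-traced effects shoot relatively few rays per pixel, so the raw result is grainy, and a denoiser filters that grain away. Below is a toy, purely illustrative sketch; real denoisers (temporal and/or AI-driven, whether vendor- or engine-supplied) are far more sophisticated, which is exactly why which one is used can affect the final look:

```cpp
#include <vector>

// Toy 3x3 box-filter "denoiser" over a grayscale image (row-major floats).
// Purely illustrative: real ray tracing denoisers are temporal and/or
// AI-driven, and which one a game ships with can affect the final image.
std::vector<float> boxDenoise(const std::vector<float>& img, int w, int h) {
    std::vector<float> out(img.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int n = 0;
            // Average each pixel with its valid neighbours to smooth
            // the graininess left by low ray counts per pixel.
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && ny >= 0 && nx < w && ny < h) {
                        sum += img[ny * w + nx];
                        ++n;
                    }
                }
            out[y * w + x] = sum / n;
        }
    }
    return out;
}
```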