• conciselyverbose@sh.itjust.works · 2 months ago

    Yeah, there’s a reason every movie attempting 3D CG with any real budget has used path tracing for years. It’s objectively, massively higher quality.

    You don’t need upscaling or denoising (the “AI” they’re talking about) to do raster stuff, but realistic lighting does a hugely better job regardless of the art style you’re going for. It’s not just photorealism, either. Look at Disney’s animated films: Moana and Elemental aren’t photorealistic and aren’t trying to be, but they’re still massively enhanced visually by making the behavior of light more realistic, because that’s what our eyes understand. It takes a lot of math to handle all those volumetric shots through water and glass in a way that looks good.
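    To make the “it takes a lot of math” point concrete, here’s a toy sketch (mine, not from any production renderer) of the core idea behind path tracing: a pixel’s brightness is an integral over incoming light directions, estimated by averaging random samples. For a surface under a uniform sky of radiance 1, the exact irradiance is π, and the estimate converges to it as the sample count grows:

    ```python
    import math
    import random

    def estimate_irradiance(num_samples, seed=0):
        """Monte Carlo estimate of irradiance under a uniform sky of radiance 1.

        Integrand: L * cos(theta) over the hemisphere; the exact answer is pi.
        Uses uniform hemisphere sampling, where cos(theta) is uniform in [0, 1)
        and the pdf is 1 / (2*pi).
        """
        rng = random.Random(seed)
        total = 0.0
        for _ in range(num_samples):
            cos_theta = rng.random()            # uniform hemisphere sample
            total += cos_theta * (2.0 * math.pi)  # divide by pdf = 1/(2*pi)
        return total / num_samples
    ```

    With a handful of samples per pixel the estimate is visibly noisy; with hundreds of thousands it lands near π. That noise-versus-sample-count tradeoff is exactly why denoisers exist.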

    • MudMan@fedia.io · 2 months ago

      Yep. The thing is, even if you’re on high-end hardware doing offline CGI, you’re using these techniques for denoising. If you’re doing academic research, you’re probably upscaling with machine learning.

      People get stuck on the “AI” nonsense, but ultimately you need upscaling and denoising of some sort to render a certain tier of visuals. You want the highest-quality version of that you can fit in your budgeted frame time. If that uses machine learning, great. If it doesn’t, great as well. It’s all tensor math anyway; it’s about using your GPU compute in the most efficient way you can.