A U.K. woman was photographed standing in front of a mirror where her reflections didn’t match, but not because of a glitch in the Matrix. Instead, it was a simple iPhone computational photography mistake.

  • xantoxis@lemmy.world · 11 months ago

    Program against it? It’s a camera. Put what’s on the light sensor into the file and you’re done. They programmed it to make this happen, by pretending that multiple images are the same image.

    • Nine@lemmy.world · 11 months ago

      That’s oversimplified. There’s only so much you can capture on a sensor at the sizes used in mobile devices. To compensate, there’s A LOT of processing that goes on. Even higher-end DSLR cameras do post processing.

      Even shooting RAW like you’re suggesting involves some amount of post processing for things like lens corrections.

      It’s all that post processing that allows us to have things like HDR images, for example. It also allows us to compensate for various lighting and motion changes.

      Mobile phone cameras are more about the software than the hardware these days.
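
      As a rough illustration of the kind of multi-frame processing meant here, this is a minimal exposure-fusion sketch using OpenCV’s Mertens merge; the frame file names are placeholders, and real phone pipelines do far more than this.

      ```python
      # A minimal sketch of multi-frame exposure fusion, one of the "HDR"
      # tricks mentioned above. Assumes OpenCV and NumPy; file names are
      # placeholders.
      import cv2
      import numpy as np

      # Three hypothetical frames of the same scene at different exposures.
      frames = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]

      # Mertens exposure fusion weights each pixel by contrast, saturation
      # and well-exposedness, then blends; no tone mapping step is needed.
      fused = cv2.createMergeMertens().process(
          [f.astype(np.float32) / 255.0 for f in frames]
      )

      # The result is roughly in [0, 1]; clip back to 8-bit to save it.
      cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
      ```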

      • cmnybo@discuss.tchncs.de · 11 months ago

        With a DSLR, the person editing the pictures has full control over what post processing is done to the RAW files.
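
        To make that control concrete, here’s a hedged sketch of developing a raw file with explicit settings via the rawpy (LibRaw) bindings; the file name and the particular parameter values are just illustrative.

        ```python
        # A sketch of developing a DSLR raw file with explicit, user-chosen
        # settings via the rawpy (LibRaw) bindings. The file name is a
        # placeholder; every kwarg here is a deliberate choice by the editor.
        import rawpy
        import imageio

        with rawpy.imread("photo.CR2") as raw:
            rgb = raw.postprocess(
                use_camera_wb=True,    # white balance: camera's, or pass user_wb
                no_auto_bright=True,   # don't let the converter pick exposure
                gamma=(2.222, 4.5),    # the tone curve is an explicit choice too
                output_bps=16,         # keep 16 bits per channel for editing
            )

        imageio.imwrite("developed.tiff", rgb)
        ```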

        • Nine@lemmy.world · 11 months ago

          Correct, I was referring to RAW shot on mobile, not a proper DSLR. I guess I should have been clearer about that. Sorry!

          • uzay@infosec.pub · 11 months ago

            You might be conflating a RAW photo file with the way it is displayed. A RAW file isn’t actually an image file; it’s a container holding the sensor pixel data, metadata, and a pre-generated JPG thumbnail. To actually display an image, the viewer application either has to interpret the sensor data into an image (possibly with changes of its own) or just display the contained JPG. On mobile phones I think it’s most likely that the JPG is generated with pre-applied post-processing and displayed that way. That doesn’t mean the RAW file itself has any post-processing applied to it, though.
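
            A small sketch of the two paths described above, using the rawpy library (the file name is a placeholder): pull out the container’s pre-generated JPG, or interpret the sensor data yourself.

            ```python
            # A sketch of the two paths described above, using rawpy.
            # The file name is a placeholder.
            import rawpy

            with rawpy.imread("IMG_0001.DNG") as raw:
                # Path 1: the pre-generated JPG already sitting in the container,
                # processed however the camera or phone saw fit.
                thumb = raw.extract_thumb()
                if thumb.format == rawpy.ThumbFormat.JPEG:
                    with open("preview.jpg", "wb") as f:
                        f.write(thumb.data)   # bytes of the ready-made JPG

                # Path 2: interpret the untouched sensor data ourselves.
                rgb = raw.postprocess()       # our rendering, not the camera's
            ```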

      • randombullet@feddit.de · 11 months ago

        Raw files from cameras have metadata that tells raw converters which color profile and lens the shot was taken with, but any camera worth using professionally doesn’t bake corrections into the raw data itself. However, in special cases, such as lenses with high distortion, the raw files do have a distortion correction profile enabled by default.
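
        For illustration, here’s a hedged sketch of reading that metadata with the exifread library; the file name is a placeholder, and the exact tag names vary by camera vendor, so the ones printed are only examples.

        ```python
        # A sketch of peeking at the metadata a raw converter reads.
        # Uses the exifread library; the file name is a placeholder and the
        # tag names below are only illustrative, since they vary by vendor.
        import exifread

        with open("photo.NEF", "rb") as f:
            tags = exifread.process_file(f, details=False)

        for key in ("Image Make", "Image Model", "EXIF LensModel",
                    "EXIF FocalLength", "EXIF FNumber", "EXIF WhiteBalance"):
            if key in tags:
                print(key, "->", tags[key])
        ```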

        • Nine@lemmy.world · 11 months ago

          Correct, I was referring to RAW shot on mobile devices, not a proper DSLR. That was my observation, based on using the iPhone and Android raw formats.

          This isn’t my area of expertise so if I’m wrong about that aspect too let me know! 😃

        • Nine@lemmy.world · 11 months ago

          So what was I wrong about? I’m always happy to learn from my mistakes! 😊

          Do you have some whitepapers I can reference too?

            • Nine@lemmy.world · 11 months ago

              Gonna provide more information, or is this just a ‘trust me bro’ situation?

              • SpaceNoodle@lemmy.world · edited · 11 months ago

                Not sure what I’d have to gain from just lying on the Internet about inconsequential things.

                Also not sure I can disclose too many technical details due to NDAs, but I’ve worked on camera stacks on multiple Android-based devices. Yes, there are tons of layers of firmware and software throughout the camera stack, but very importantly, it does not alter consequential elements of images; it concentrates on image quality, not image contents.

                While the sensors in smartphones might not be as physically large as those in DSLRs - at least, in general - there’s still significant quality in the raw sensor data that does not inherently require the sort of image stitching that Apple is doing.

          • SpaceNoodle@lemmy.world · edited · 11 months ago

            🙄

            Edit: oh, you’re the actual illiterate person from another post. Thanks for stalking me.

            • schmidtster@lemmy.world · edited · 11 months ago

              You think too highly of yourself.

              When you comment-spam just about every thread, you’ll come across people multiple times.

    • ricecake@sh.itjust.works · 11 months ago

      What’s on the light sensor when? There’s no shutter; it can capture a continuous stream of light indefinitely.

      Most people want a rough representation of what’s hitting the sensor when they push the button. But they don’t actually care about the sensor; they care about what they can see, which doesn’t include the blur from the camera wobbling or the slight blur of the subject moving.
      They want the lighting to match how they perceived the scene, even though that isn’t what the sensor picked up, because your brain edits what you see before you comprehend the image.

      Doing those corrections is a small step toward incorporating discontinuities in the capture window for better results.
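
      As a toy sketch of that idea (not any vendor’s actual pipeline), a camera app might keep a small ring buffer of preview frames and, on shutter press, save the least blurry one captured around the button press; the frame callbacks here are hypothetical.

      ```python
      # A toy sketch, not any vendor's real pipeline: keep a rolling buffer of
      # preview frames, and treat "the moment the button was pressed" as "the
      # sharpest frame near the button press". The callbacks are hypothetical.
      from collections import deque

      import cv2

      ring = deque(maxlen=8)   # most recent preview frames, oldest dropped first

      def sharpness(frame):
          # Variance of the Laplacian: low for blurry frames, high for crisp ones.
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          return cv2.Laplacian(gray, cv2.CV_64F).var()

      def on_new_frame(frame):
          # Called continuously while the viewfinder is open.
          ring.append(frame)

      def on_shutter_press():
          # Pick the least blurry frame captured around the button press.
          best = max(ring, key=sharpness)
          cv2.imwrite("capture.jpg", best)
          return best
      ```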