When Adobe Inc. released its Firefly image-generating software last year, the company said the artificial intelligence model was trained mainly on Adobe Stock, its database of hundreds of millions of licensed images. Firefly, Adobe said, was a “commercially safe” alternative to competitors like Midjourney, which learned by scraping pictures from across the internet.

But behind the scenes, Adobe was also relying in part on AI-generated content to train Firefly, including images from those same AI rivals. In numerous presentations and public posts about how Firefly is safer than the competition because of its training data, Adobe never made clear that its model actually used images from some of those same competitors.

    • cynar@lemmy.world · +12 · 7 months ago

      Depends how it’s done.

      Training on fully generated images would definitely start to create a copying-error type of problem.

      However, it’s not quite that simple. An AI system can also be used to distort an image. The derivatives force the learning AI to notice different things, which can vastly extend the pool of data to learn from and so improve the final model (see the sketch below).

      Adobe obviously decided that the copying errors were worth the extended dataset.
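      As a rough sketch of that distortion idea (nothing Adobe-specific; torchvision and the file name are just illustrative assumptions):

      ```python
      # Toy augmentation sketch: distort one image many ways to extend the
      # training pool. torchvision and "photo.jpg" are illustrative choices.
      from PIL import Image
      from torchvision import transforms

      augment = transforms.Compose([
          transforms.RandomHorizontalFlip(p=0.5),
          transforms.ColorJitter(brightness=0.3, contrast=0.3),
          transforms.RandomRotation(degrees=15),
          transforms.RandomResizedCrop(size=512, scale=(0.8, 1.0)),
      ])

      original = Image.open("photo.jpg").convert("RGB")
      # Each pass yields a differently-distorted derivative, so one source
      # image becomes many training samples that emphasize different features.
      derivatives = [augment(original) for _ in range(8)]
      ```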

    • General_Effort@lemmy.world · +2/-33 · 7 months ago

      No.

      I feel I should explain this, but I’ve got nothing. An image is an image. Whether it’s good or bad is a matter of personal preference.

      • hyper@lemmy.zip · +31/-1 · 7 months ago

        I’m not so sure about that… if you train an AI on images with disfigured anatomy, which it then takes as the “right” way, it will generate new images with messed-up anatomy. That creates a feedback loop, like when a mic picks up its own signal.
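        A toy numerical version of that feedback loop (not how image models actually train, just the statistics of a model learning from its own output):

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        data = rng.normal(loc=0.0, scale=1.0, size=20)  # "real" data, wide spread

        for gen in range(1, 101):
            # "Train": estimate the distribution from the current dataset.
            mu, sigma = data.mean(), data.std()
            # The next generation sees only the model's own output.
            data = rng.normal(loc=mu, scale=sigma, size=20)
            if gen % 20 == 0:
                print(f"generation {gen}: sigma = {sigma:.3f}")
        # sigma tends to shrink over the generations: the model narrows onto
        # its own output and loses the tails of the original data.
        ```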

        • General_Effort@lemmy.world · +6/-5 · 7 months ago

          Well, you wouldn’t train on images that you consider bad, or rather you’d use them as examples of what not to do.

          Yes, you have to be careful when training a model on its own output. It already has a tendency to produce that, so it’s easy to “overshoot”, so to speak. But it’s not a problem in principle. It’s also not what’s happening here. Adobe doesn’t use the same model as Midjourney.
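          Roughly this kind of curation step, say, where quality_score is a hypothetical stand-in for whatever filter you’d actually use (human rating, an aesthetic classifier):

          ```python
          from typing import Callable

          def curate(synthetic: list[str],
                     quality_score: Callable[[str], float],
                     threshold: float = 0.8) -> list[str]:
              # Keep only generated images that clear the quality bar, so known
              # failure modes are filtered out before they re-enter training.
              return [p for p in synthetic if quality_score(p) >= threshold]

          # training_pool = real_images + curate(generated_images, quality_score)
          ```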

        • abhibeckert@lemmy.world · +3/-6 · 7 months ago

          Midjourney doesn’t generate disfigured anatomy. You’re thinking of Stable Diffusion, which is a smaller model that can generate an image in 30 seconds on my laptop GPU. Even SD is pretty good at avoiding that with decent hardware and larger models (which need more memory).

      • bionicjoey@lemmy.ca · +3 · 7 months ago

        When you process an image through the same pipeline multiple times, artifacts will appear and become amplified.
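        The JPEG version of the same point, as a sketch (Pillow; the file names are placeholders, and the small rotation stands in for whatever transform a real pipeline applies between passes):

        ```python
        import io
        from PIL import Image

        image = Image.open("photo.jpg").convert("RGB")
        for _ in range(50):
            # A slight transform between passes keeps the encoder from just
            # reproducing its previous output bit-for-bit.
            image = image.rotate(1, resample=Image.BICUBIC)
            buffer = io.BytesIO()
            image.save(buffer, format="JPEG", quality=75)  # lossy re-encode
            buffer.seek(0)
            image = Image.open(buffer).convert("RGB")
        image.save("after_50_passes.jpg")  # blocking and ringing accumulate
        ```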

        • General_Effort@lemmy.world · +1/-3 · 7 months ago

          What’s happening here is nothing like that. There is no amplifier, and images aren’t run through a pipeline.

            • General_Effort@lemmy.world · +1/-2 · 7 months ago

              Yes, but the model is the end of that pipeline. The image is not supposed to come out again. A model can “memorize” an image, but then you wouldn’t necessarily expect an amplification of artifacts. Image generators are not supposed to do lossy compression, though the tech could be used for that.

              • Grimy@lemmy.world · +5 · 7 months ago

                If an image has errors that are hard for the human eye to spot and the model gets trained on those images, errors that aren’t naturally present in real data get amplified.

                It’s not a model killer, but it is something to watch out for.

                • General_Effort@lemmy.world · +1/-3 · 7 months ago

                  Yes, if you want realism. But that’s just one of the things that people look for. Personal preference.

                    • SomeGuy69@lemmy.world · +5 · 7 months ago

                      Invisible artifacts still degrade the results, realistic or not. Like issues with fingers, shadows, eyes, colors, etc.