While I am quite excited about the Walton Goggins-infused Amazon Fallout series, the show debuted some promo art for the project ahead of official stills or footage and…it appears to be AI generated.

  • Ghostalmedia@lemmy.world · +120 / -1 · 1 year ago

    My guess is that AI’s first big victim for graphic design will be stock art. Previously, crap like that background asset would just be stock purchased from Getty or Adobe stock. Now it can be generated.

    I’m already starting to use it instead of paying for bullshit licenses.

    • iforgotmyinstance@lemmy.world · +43 / -12 · edited · 1 year ago

      I’ve been using AI for school and work, as God intended: give it the raw material, have it do the grunt organizational work, and then proofread to correct anything.

      There is very little to say that hasn’t been said. As an example of our limitations as humans, there are only fifty-odd unique plot lines in the English language. To expect each person to be completely original is asinine.

      It’s a tool, one of many in my toolbox. People who are just flat against any and all AI or LLMs are behind the curve.

      • soulfirethewolf@lemdro.id · +9 / -6 · 1 year ago

        Pretty much.

        People very frequently complain about AI taking the jobs of artists. But if the money was never actually going to be put on the table for artists to claim, I really don’t think that was going to help much.

        That doesn’t mean I hate artists or what they do, absolutely not. It’s just that artists are people, and people are limited in how much they can do at any one time.

        For the past couple of months, I’ve been waiting on multiple artists to finish their commission queues. And one of them I’m worried I’ll have to turn away, because a variety of life changes has cost me my job and reduced my income.

        As of right now, the cost of generating a picture with a tool like Stable Diffusion or DALL-E is pretty low, the former even being free if you have the right hardware. And these systems are almost always available and can produce results in a matter of seconds.

        Of course, that doesn’t change the fact that these tools are only good at painting the bigger picture. They have a tendency to choke on the smaller details. And I would personally rather wait for an actual person to be available to work on something original that’s also capable of filling a niche that AI models have yet to be trained on.

        • niisyth@lemmy.ca · +18 / -5 · 1 year ago

          This entirely disregards the fact that these models were trained on human artists’ work without consent or remuneration. As it stands, it is not “AI”, it is just a glorified plagiarism machine. Not to say it isn’t impressive, but it has already stolen work already done by artists and further stealing upcoming work by mashing together older works.

          There are ways to do it ethically, by training only on artwork used with permission, kind of like how Adobe is doing it, but that isn’t going to have as wide a reach as the free alternatives.

          • FaceDeer@kbin.social · +7 / -11 · 1 year ago

            but it has already stolen work already done by artists and further stealing upcoming work by mashing together older works.

            You keep using that word “stolen”, I do not think it means what you think it means.

            Also, AIs do not “mash together” works from their training sets. This is a very common and very incorrect idea of how they work. They are not collage generators or copy-and-paste machines. They learn concepts from the images they train on; they don’t actually remember fragments of those images to later regurgitate as some sort of patched-together Frankenstein’s monster.

            • Send_me_nude_girls@feddit.de · +4 / -3 · 1 year ago

              You’re correct, but it’s still too early, and most people haven’t spent enough time with AI to fully understand it. Maybe they never will.

              • FaceDeer@kbin.social · +4 / -3 · 1 year ago

                Like the classic quote says, it is difficult to get a man to understand something when his salary depends upon his not understanding it.

            • Pandemanium@lemm.ee · +1 / -1 · 1 year ago

              I just asked Wombo Dream to make the Mona Lisa and it did. Sure, you can tell it’s not exactly the real thing, but I don’t know how you can say it didn’t copy any of the actual Mona Lisa original.

              • FaceDeer@kbin.social · +2 · edited · 1 year ago

                I considered including mention of overfitting in my earlier comment, but since it’s such an edge case I felt it would just be an irrelevant digression.

                When a particular image has a great many duplicates in the training set - hundreds or even thousands of copies are necessary - then you get the phenomenon of overfitting. In that case you do get this sort of “memorization” of a particular image, because during training you are hitting the neural net over and over with the exact same inputs and really drilling it into them. This is universally considered undesirable, because there’s no point to it - why spend thousands of dollars to do something that a copy/paste command could do so much better and more easily? So when image generators are trained the training data goes through a “de-duplication” step intended to try to prevent this sort of thing from happening. Images like the Mona Lisa are so incredibly common that they still slip through the cracks, though.
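
                A toy sketch of that de-duplication step (hypothetical; real training pipelines detect near-duplicates with perceptual hashes or embedding similarity rather than exact byte matches, and none of these names come from an actual system):

```python
import hashlib

def deduplicate(images):
    """Drop exact-duplicate image byte blobs, keeping the first occurrence."""
    seen = set()
    unique = []
    for img in images:
        digest = hashlib.sha256(img).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(img)
    return unique

# Toy dataset: one "image" repeated a thousand times alongside two others.
dataset = [b"mona_lisa"] * 1000 + [b"starry_night", b"the_scream"]
print(len(deduplicate(dataset)))  # prints 3
```

                After this pass each image hits the network roughly once per epoch, which is exactly why an image needs to slip past de-duplication many times over before it can be memorized.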

                There’s a paper from some months back that commonly comes up when people want to go “aha, generative AI copies its training data!” But in reality this paper shows just how difficult it is to arrange for overfitting to happen. The researchers used an older version of Stable Diffusion whose training set was not well curated and is no longer used due to its poor quality, and even then it took them hundreds of millions of attempts to find just a handful of images from the training set that they could dredge back out of it in recognizable form.

              • emeralddawn45@discuss.tchncs.de · +1 / -2 · 1 year ago

                People have also copied art for as long as art has existed. You can buy a copy of the Mona Lisa in the gift shop, or print your own. That’s why the market for art has always been hyperfocused on ‘originals’. But rarely are the artists the ones getting rich off their art, especially now. I hate capitalism as much as anyone, but if your motivation for making art is money, you’re in the wrong business and your art probably isn’t that good anyway.

    • coffeebiscuit@lemmy.world · +22 / -5 · 1 year ago

      Graphic designers aren’t the first. Automation has been ending jobs for decades. AI is just another form of automation.

    • mosiacmango@lemm.ee · +7 / -7 · edited · 1 year ago

      The fun part here, though, is that they don’t have copyright on that art. If any of the “stock AI footage” becomes iconic, it’s public domain.

      Dicey spot for a studio to be in, but it does save some bucks, so they are plowing ahead.

      • FaceDeer@kbin.social · +27 / -3 · 1 year ago

        You should consult with a lawyer first. The amount of misinformation circulating on the Internet about AI art all being public domain is enormous. The court case that recently made the rounds (Thaler v. Perlmutter), for example, does not say what most people seemed eager to assume it said.

        • affiliate@lemmy.world · +5 · 1 year ago

          I’m also someone who has been misinformed about the copyright status of AI art. Could you explain how it actually works, or link to a resource that does? I tried searching around for a bit but couldn’t find a clear consensus on it.

        • Xartle@lemmy.ml · +2 · 1 year ago

          It will be really interesting to see how the case law develops. Personally, I’m more interested in the IP side of things. A lot of lawyers I work with currently view LLMs like a shredder in front of a leaf blower. Which, in a way, they are.

      • Balios@kbin.social · +3 · 1 year ago

        Neither do they have copyright to the stock art they used to purchase. The complete piece, however, including the Pip-Boy, is not AI generated. Someone put this together and put effort into it, which easily qualifies it for copyright protection, even if the background is AI generated instead of purchased stock art.

      • AEsheron@lemmy.world · +2 · 1 year ago

        If you’re talking about that recent legal case, look again. The artist claimed that the AI was the sole author but that he should own the IP. I think the vast majority of people would say that, in its current state, AI is a digital tool an author uses to make art. The recent ruling just reconfirmed that (a) machines aren’t people, and (b) you can’t simply claim ownership of another author’s work.

    • jimmux@programming.dev · +0 / -1 · 1 year ago

      They will be generating it themselves soon enough. I contributed some stock photos in the past, and they recently sent me info about their new contribution pipeline for content that may not pass the usual quality threshold but will help train their models. If they do it right, who knows, maybe they can get better results worth paying for.