‘Impossible’ to create AI tools like ChatGPT without copyrighted material, OpenAI says: Pressure grows on artificial intelligence firms over the content used to train their products

  • dhork@lemmy.world · +88 / −8 · 1 year ago (edited)

    ¿Porque no los dos?

    I don’t understand why people are defending AI companies sucking up all human knowledge by saying “well, yeah, copyrights are too long anyway”.

    Even if we went back to the pre-1976 term of 28 years, renewable once for a total of 56 years, there’s still a ton of recent works that AI are using without any compensation to their creators.

    I think it’s because people are taking this “intelligence” metaphor a bit too far and think that if we restrict how the AI uses copyrighted works, that would restrict how humans use them too. But AI isn’t human; it’s just a glorified search engine. And at least standard search engines only return a link to the actual content. These AI models chew up the content and spit out something based on it. It simply makes sense that this new process should be licensed separately, and I don’t care if it makes some AI companies go bankrupt. Maybe they can work adequate payment for content into their business models going forward.

    • deweydecibel@lemmy.world · +30 / −7 · 1 year ago (edited)

      It shouldn’t be cheap to absorb and regurgitate the works of humans the world over in an effort to replace those humans and subsequently enrich a handful of Silicon Valley people.

      Like, I don’t care what you think about copyright law and how corporations abuse it, AI itself is corporate abuse.

      And unlike copyright, which does serve its intended purpose of helping small time creators as much as it helps Disney, the true benefits of AI are overwhelmingly for corporations and investors. If our draconian copyright system is the best tool we have to combat that, good. It’s absolutely the lesser of the two evils.

      • lolcatnip@reddthat.com · +10 / −5 · 1 year ago

        Do you believe it’s reasonable, in general, to develop technology that has the potential to replace some human labor?

        Do you believe compensating copyright holders would benefit the individuals whose livelihood is at risk?

        the true benefits of AI are overwhelmingly for corporations and investors

        “True” is doing a lot of work here, I think. From my perspective, the main beneficiaries of technology like LLMs and Stable Diffusion are people trying to do their work more efficiently, people playing around, and small-time creators who suddenly have custom graphics to illustrate their videos, articles, etc. Maybe you’re talking about something different, like deepfakes? The downside of using a vague term like “AI” is that it’s too easy to accidentally conflate things that have little in common.

        • EldritchFeminity@lemmy.blahaj.zone · +10 · 1 year ago

          There are two general groups in my mind when it comes to AI: those whose work would benefit from the increased efficiency AI in various forms can bring, and those who want the rewards of work without putting in the effort of working.

          The former include people like artists who could do stuff like creating iterations of concept sketches before choosing one to use for a piece to make that part of their job easier/faster.

          Much of the opposition to AI comes from people worrying about, or who have been harmed by, the latter group. And it all comes down to the way that the data sets are sourced.

          These are people who want to use the hard work of others for their own benefit, without giving them compensation; and the corporations fall pretty squarely into this group. As does your comment about “small-time creators who suddenly have custom graphics to illustrate their videos, articles, etc.” Before AI, they were free to hire an artist to do that for them. MidJourney, for example, falls into this same category - the developers were caught discussing various artists that they “launder through a fine tuned Codex” (their words, not mine, here for source) for prompts. If these sorts of generators were using opt-in data sets, paying licensing fees to the creators, or some other way to get permission to use their work, this tech could have tons of wonderful uses, like for those small-time creators. This is how music works. There are entire businesses that run on licensing copyright free music out to small-time creators for their videos and stuff, but they don’t go out recording bands and then splicing their songs up to create synthesizers to sell. They pay musicians to create those songs.

          Instead of doing what the guy behind IKEA did when he thought “people besides the rich deserve to be able to have furniture”, they’re cutting up Bob Ross paintings to sell as part of their collages to people who want to make art without having to actually learn how to make it or pay somebody to turn their idea into reality.

          Artists already struggle in a world that devalues creativity (I could make an entire rant on that, but the short of it is that the starving artist stereotype exists for a reason), and the way companies want to use AI like this is to turn the act of creating art into a commodity even more; to further divest the inherently human part of art from it. They don’t want to give people more time to create and think and enjoy life; they merely want to wring even more value out of them more efficiently. They want to take the writings of their journalists and use them to train the AI that they’re going to replace them with, like a video game journalism company did last fall with all of the writers they had on staff in their subsidiary companies. They think, “why keep 20 writers on staff when we can have a computer churn out articles for our 10 subsidiaries?”

          Last year, some guy took a screenshot of a piece of art that one of the artists for Genshin Impact was working on while livestreaming, ran it through some form of image generator, and then came back threatening to sue the artist for stealing his work.

          Copyright laws don’t favor the small guy, but they do help them protect their work as a byproduct of working for corporate interests. In the case of the Genshin artist, the fact that they were livestreaming their work and had undeniable, recorded proof that the work was theirs and not some rando in their stream meant that copyright law would’ve been on their side if it had actually gone anywhere rather than some asshole just being an asshole. Trademark isn’t quite the same, but I always love telling the story of the time my dad got a cease and desist letter from a company in another state for the name of a product his small business made. So he did some research, found out that they didn’t have the trademark for it in that state, got the trademark himself, and then sent them back their own letter with the names cut out and pasted in the opposite spots. He never heard from them again!

    • lolcatnip@reddthat.com · +8 / −6 · 1 year ago

      I don’t understand why people are defending AI companies

      Because it’s not just big companies that are affected; it’s the technology itself. People saying you can’t train a model on copyrighted works are essentially saying nobody can develop those kinds of models at all. A lot of people here are naturally opposed to the idea that the development of any useful technology should be effectively illegal.

      • assassin_aragorn@lemmy.world · +14 / −3 · 1 year ago

        This is frankly very simple.

        • If the AI is trained on copyrighted material and doesn’t pay for it, then the model should be freely available for everyone to use.

        • If the AI is trained on copyrighted material and pays a license for it, then the company can charge people for using the model.

        If information should be free and copyright is stifling, then OpenAI shouldn’t be able to charge for access. If information is valuable and should be paid for, then OpenAI should have paid for the training material.

        OpenAI is trying to have it both ways. They don’t want to pay for information, but they want to charge for information. They can’t have one without the other.

      • BURN@lemmy.world · +11 / −2 · 1 year ago

        You can make these models just fine using licensed data. So can any hobbyist.

        You just can’t steal other people’s creations to make your models.

        • lolcatnip@reddthat.com · +6 / −7 · 1 year ago

          Of course it sounds bad when you use the word “steal”, but I’m far from convinced that training is theft, and using inflammatory language just makes me less inclined to listen to what you have to say.

          • BURN@lemmy.world · +10 / −2 · 1 year ago

            Training is theft imo. You have to scrape and store the training data, which amounts to copyright violation based on replication. It’s an incredibly simple concept. The model isn’t the problem here, the training data is.

      • dhork@lemmy.world · +9 / −2 · 1 year ago

        I am not saying you can’t train on copyrighted works at all; I am saying you can’t train on copyrighted works without permission. There are fair use exemptions to copyright, but training AI shouldn’t qualify as one. AI companies will have to acknowledge this and get permission (probably by paying money) before incorporating content into their models. They’ll be able to afford it.

        • lolcatnip@reddthat.com · +3 / −2 · 1 year ago

          What if I do it myself? Do I still need to get permission? And if so, why should I?

          I don’t believe the legality of doing something should depend on who’s doing it.

          • BURN@lemmy.world · +5 / −2 · 1 year ago

            Yes you would need permission. Just because you’re a hobbyist doesn’t mean you’re exempt from needing to follow the rules.

            As soon as it goes beyond a completely offline, personal, non-replicable project, it should be subject to the same copyright laws.

            If you purely create a data agnostic AI model and share the code, there’s no problem, as you’re not profiting off of the training data. If you create an AI model that’s available for others to use, then you’d need to have the licensing rights to all of the training data.