• Fisk400@feddit.nu

    They know what they fed the thing. Not backing up their own training data would be insane. They are not insane, just thieves

    • Echo Dot@feddit.uk

      Everyone says this, but the truth is copyright law has been unfit for purpose for well over 30 years now. When the laws were written, no one expected something like the internet to ever come along, and they certainly didn’t expect something like AI. We can’t just keep applying the same old copyright laws to new situations when they already don’t work.

      I’m sure they did illegally obtain the work, but is that necessarily a bad thing? For example, they’re not actually making that content available to anyone. So if I pirate a movie and then only I watch it, I don’t think anyone would really think I should be arrested for that, so why is it unacceptable for them but fine for me?

      • A_Very_Big_Fan@lemmy.world

        if I pirate a movie and then only I watch it, I don’t think anyone would really think I should be arrested for that, so why is it unacceptable for them but fine for me?

        Because it’s more analogous to watching a video being broadcast outdoors in public, or looking at a mural someone painted on a wall, and letting it inform your creative works going forward. Not even recording it, just looking at it.

        As far as we know, they never pirated anything. What we do know is it was trained on data that literally anybody can go out and look at for themselves and have it inform their own work. If they’re out here torrenting a bunch of movies they don’t own or aren’t licensing, then the argument against them has merit. But until then, I think all of this is a bunch of AI hysteria over some shit humans have been doing since the first human created a thing.

        • StarPupil@ttrpg.network

          An AI (in its current form) isn’t a person drawing inspiration from the world around it; it’s a program made by people, with inputs chosen by those people. If those people didn’t ask permission to use other people’s licensed work for their product, then they are plagiarising that work, and they should be subject to the same penalties that, for example, a game company using stolen art in their game should face. An AI doesn’t become inspired; it copies existing things to predict what it thinks its user wants to see. If we produce a real thinking AI at some point in the future, one with self-determination and whatnot, the story will be different, but for now it isn’t.

          • A_Very_Big_Fan@lemmy.world

            What is web scraping if not gathering information from around the world? As long as you’re not distributing copyrighted content (and the models in question here don’t, btw), then fair use is at play. I’m not plagiarizing the news by reading it or by talking about what I learned, but I would be if I just copy/pasted my response from the article.

            Reading publicly available data isn’t a copyright violation, and it certainly isn’t a violation of fair use. If it were, then you just plagiarized my comment by reading it before you responded.
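
            For what it’s worth, “scraping” here just means fetching pages anyone can already read and pulling the text out of them. A minimal sketch (example.com is just a stand-in for any public page):

            ```python
            # Fetch a publicly available page and read its text locally.
            # Nothing is redistributed; the text is just "looked at".
            import urllib.request
            from html.parser import HTMLParser

            class TextExtractor(HTMLParser):
                def __init__(self):
                    super().__init__()
                    self.chunks = []

                def handle_data(self, data):
                    if data.strip():
                        self.chunks.append(data.strip())

            with urllib.request.urlopen("https://example.com") as resp:
                html = resp.read().decode("utf-8", errors="replace")

            parser = TextExtractor()
            parser.feed(html)
            print(" ".join(parser.chunks)[:200])  # analyse it locally, don't republish it
            ```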

      • oKtosiTe@lemmy.world

        if I pirate a movie and then only I watch it, I don’t think anyone would really think I should be arrested for that

        There are definitely people out there that think you should be arrested for that.

        • Echo Dot@feddit.uk

          Even the police are unsure if it’s actually a crime though. Crimes require someone to lose something and no one can point to a lost product so it’s difficult to really quantify.

          And it’s not even technically breach of copyright since you’re not selling it.

          • exanime@lemmy.today

            But they ARE selling it … Every answer ChatGPT gives comes from possibly stolen material

            • confusedbytheBasics@lemmy.world

              You’re using the word ‘stolen’, which doesn’t fit. It would be accurate to say ‘every answer comes from possibly unlicensed material’.

            • BoscoBear@lemmy.sdf.org

              Isn’t that true of every opinion you have? All the knowledge you have is based on the works of others that came before you.

              • exanime@lemmy.today

                Not until I bill you for it

                Also, no, there is such a thing as an original thought or opinion… even if it’s informed by other knowledge

                There is a difference between reinterpreting other knowledge and just Frankensteining multiple works together

                • BoscoBear@lemmy.sdf.org

                  I don’t know enough about LLMs, but neural networks are capable of original thought. I suspect LLMs are too because of their relationship to neural networks.

      • rottingleaf@lemmy.zip

        That is a bad thing if they want to be exempt from the law because they are doing a big, very important thing, and we shouldn’t exempt them.

        The copyright laws are shit, but applying them selectively is orders of magnitude worse.

      • exanime@lemmy.today

        Because the actual comparison is that you stole ALL movies, started your own Netflix with them and are lining up to literally make billions by taking the jobs of millions of people, including those you stole from

        • BoscoBear@lemmy.sdf.org

          I would say it is closer to watching all the movies, regardless of how you got them, and then teaching a film class at UCLA.

        • A_Very_Big_Fan@lemmy.world

          If I paint a melty clock hanging off of a table, how have I stolen from Salvador Dali? What did I “steal” from Tolkien when I drew this?

          you stole ALL movies, started your own Netflix with them

          The model in question can’t even try to distribute copyrighted material. You could have easily checked for yourself, but once again I find myself having to do the footwork for you guys.

          • exanime@lemmy.today

            If you sell your melty clock then yes, it’s not “stealing”, but you are violating copyright; that’s how it works

            The “model in question” is a bit of a prototype; I thought it was clear we are talking about where these models are going… Maybe you’d get it if you came down off your high horse

            • A_Very_Big_Fan@lemmy.world

              Dali doesn’t own the concept of a melting clock. If I include a melting clock in my own work, as long as it’s not his melting clock with all the other elements of his painting, it’s fair use.

              GPT hasn’t been a prototype since before 2018, and the copyright restrictions are only getting tighter every time it’s updated so idk what you’re on about.

      • GiveMemes@jlai.lu

        Ok, but training an AI is not equivalent to watching a movie. It’s more like putting a game on one of those 300-games-in-one DS cartridges back in the day.

    • VirtualOdour@sh.itjust.works

      That’s really not how it works though; it’s a web crawler, they’re not going to download the whole internet

      And a reason they don’t is that it could actually be copyright infringement in some cases, whereas what they do legally isn’t (no matter how much people wish the law were set based on their emotions)

  • Buttons@programming.dev

    If I were the reporter my next question would be:

    “Do you feel that not knowing the most basic things about your product reflects on your competence as CTO?”

    • RatBin@lemmy.world

      Also about this line:

      Others, meanwhile, jumped to Murati’s defense, arguing that if you’ve ever published anything to the internet, you should be perfectly fine with AI companies gobbling it up.

      No, I am not fine. When I wrote that stuff and that research in old phpBB forums, I did not do it knowing a future machine learning system would eat it up without my consent. I never gave consent for that, despite it being publicly available, because that would be a designation of use that didn’t exist back then. Many other things are also publicly available, yet some are copyrighted, on the same basis: you can publish and share content under conditions defined by the creator of that content. So what is this, when I use zlibrary I am evil for pirating content, but OpenAI can do it just fine thanks to their huge wallets? Guess what, this will eventually create a crisis of trust, a tragedy of the commons if you will, when enough AI-generated content makes up the bulk of your future internet searches. Do we even want this?

    • ForgotAboutDre@lemmy.world

      Hilarious, but if the reporter asked this they would find it harder to get invites to events, which is a problem for journalists. Unless you’re very well regarded for your journalism, you can’t push powerful people without risking your career.

      • Abnorc@lemm.ee

        That, and the reporter is there to get information, not mess with and judge people. Asking that sort of question is really just an attack. We can leave it to commentators and ourselves to judge people.

        • Aniki 🌱🌿@lemm.ee

          This is limp dick energy. If asking questions is an attack, then you’re probably a piece of shit doing bad things.

          • Abnorc@lemm.ee

            Think about the answer you would actually get. They would dismiss the question or give some sort of nonsense answer. It’s a rhetorical question, and the only thing that it serves to do is criticize the person being asked. That’s not what reporters are there to do. If the answer would actually give some useful information to the reader, then it’s worth asking.

      • Aniki 🌱🌿@lemm.ee

        boofuckingwoo. Reporters are not supposed to be friends with the people they are writing about.

        • tb_@lemmy.world

          True, but if those same people they’re not supposed to be friends with are the ones inviting them to those events/granting them early access…

          In other words: the system is rigged.

          • Aniki 🌱🌿@lemm.ee

            Again - boofuckinghooo. Let the fuckers have no friends in the media. The media owners make journalists spineless advertisement sellers. I have very little respect for the profession at this point.

              • Deceptichum@sh.itjust.works

                booduckinghoo.

                We’re sick and tired of this shit, it will never change if people make excuses for it.

            • MalachaiConstant@lemmy.world

              You’re missing the point that they need those relationships to gain access to sources. You literally cannot force people to talk to you

    • abhibeckert@lemmy.world

      Every video ever created is copyrighted.

      The question is — do they need a license? Time will tell. This is obviously going to court.

      • Kazumara@feddit.de

        Don’t downvote this guy. He’s mostly right. Creative works have copyright protections from the moment they are created. The relevant question is indeed whether they have the necessary permissions for their use, not whether the work had protections in the first place.

        Maybe some surveillance camera footage is not sufficiently creative to get protections, but that’s hardly going to be good for machine reinforcement learning.

    • VirtualOdour@sh.itjust.works

      It’s a question based on a purposeful misunderstanding of the technology; it’s like expecting a beekeeper to know each bee’s name and bedtime. Really it’s like asking a bricklayer where each brick in the pile came from. He can tell you the batch, but he’s not going to know that this brick came from the fourth row of the sixth pallet, two from the left. There is no reason to remember that; it’s not important to anyone.

      They don’t log it because it would take huge amounts of resources and gain nothing.

      • zaphod@lemmy.ca

        What?

        Compiling quality datasets is enormously challenging and labour intensive. OpenAI absolutely knows the provenance of the data they train on as it’s part of their secret sauce. And there’s no damn way their CTO won’t have a broad strokes understanding of the origins of those datasets.

    • Bogasse@lemmy.ml

      And on the other hand, it is a very obvious question to expect. If you have something to hide, how in the world are you not prepared for this question!? 🤡

  • RatBin@lemmy.world

    Obviously nobody fully knows where so much training data comes from. They used web scraping tools like there was no tomorrow; with that amount of information you can’t tell where all the training material comes from. Which doesn’t mean that the tool is unreliable, but that we don’t truly know why it’s that good, unless you can somehow access all the layers of the digital brains operating these machines; that isn’t doable in a closed source model, so we can only speculate. This is what is called a black box, and we use it because we trust the output enough to do so. Knowing in detail the process behind each query would thus be taxing. Anyway… I’m starting to see more and more AI-generated content. YouTube is slowly but surely losing significance and importance as I don’t search for information there any longer, AI being one of the reasons for this.

  • dezmd@lemmy.world

    LLM is just another iteration of Search. Search engines do the same thing. Do we outlaw search engines?

    • AliasAKA@lemmy.world

      Sora is a generative video model, not exactly a large language model.

      But to answer your question: if all LLMs did was redirect you to where the content was hosted, then they would be search engines. But instead they reproduce what someone else was hosting, which may include copyrighted material. So they’re fundamentally different from a simple search engine. They don’t direct you to the source; they reproduce a facsimile of the source material without acknowledging or directing you to it. Sora is similar. It produces video content, but it doesn’t redirect you to the similar video content it is reproducing from. And we can argue about how close something needs to be to an existing artwork to count as a reproduction, but I think for AI models we should enforce citation models.

      • dezmd@lemmy.world

        How does a search engine know where to point you? It ingests all that data and processes it ‘locally’ on the search engine’s systems, using algorithms to organize the data for search. It’s effectively the same dataset.

        An LLM is absolutely another iteration of Search, with natural language output for the same input data. Are you arguing that search engine data ingestion is not fair use and is a copyright violation as well?

        You equate LLMs to intelligence, which they are not. It is algorithmic search iteration with natural language responses, but that doesn’t sound as cool as AI. It’s neat, it’s useful, and yes, it should cite the sourcing details (upon request), but it’s not (yet?) a real intelligence, and it is equal to search in terms of fair use and copyright arguments.

        • AliasAKA@lemmy.world

          I never equated LLMs to intelligence. And indexing the data is not the same as reproducing the webpage or the content on a webpage. For you to get beyond a small snippet that held your query when you search, you have to follow a link to the source material. Now of course Google doesn’t like this, so they did that stupid AMP thing, which has its own issues, and I disagree with AMP as a general rule as well. So, LLMs can look at the data; I just don’t think they can reproduce that data without attribution (or payment to the original creator). Perplexity.ai is a little better in this regard because it does link back to sources and is attempting to be a search-engine-like entity. But OpenAI is not, in almost all cases.
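
          To make the distinction concrete, here’s a toy sketch of what an index actually stores (the pages and URLs are made up for illustration): pointers back to the sources, not the pages themselves.

          ```python
          # Toy inverted index: map each word to the pages it appears on,
          # then answer queries with links back to the source material.
          from collections import defaultdict

          pages = {
              "https://example.com/a": "copyright law and fair use explained",
              "https://example.com/b": "training data for large language models",
          }

          index = defaultdict(set)
          for url, text in pages.items():
              for word in text.lower().split():
                  index[word].add(url)  # store a pointer to the source, not the page itself

          def search(query):
              return sorted(index.get(query.lower(), set()))

          print(search("copyright"))  # ['https://example.com/a'] -- a link, not a reproduction
          ```

          That’s the sense in which a search engine can “ingest” everything and still only ever point you somewhere else.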

    • dantheclamman@lemmy.world

      I feel conflicted about the whole thing. Technically it’s a model. I don’t feel that people should be able to sue me as a scientist for making a model based on publicly available data. I myself am merely trying to use the model itself to explain stuff about the world. But OpenAI are also selling access to the outputs of the model, which can very closely approximate people’s intellectual property. Also, most of the training data was accessed via scraping and other gray market methods that were often explicitly violating the TOU of the various places they scraped from. So it is all very difficult to sort through ethically.

  • AutoTL;DR@lemmings.world

    This is the best summary I could come up with:


    Mira Murati, OpenAI’s longtime chief technology officer, sat down with The Wall Street Journal’s Joanna Stern this week to discuss Sora, the company’s forthcoming video-generating AI.

    It’s a bad look all around for OpenAI, which has drawn wide controversy — not to mention multiple copyright lawsuits, including one from The New York Times — for its data-scraping practices.

    After the interview, Murati reportedly confirmed to the WSJ that Shutterstock videos were indeed included in Sora’s training set.

    But when you consider the vastness of video content across the web, any clips available to OpenAI through Shutterstock are likely only a small drop in the Sora training data pond.

    Others, meanwhile, jumped to Murati’s defense, arguing that if you’ve ever published anything to the internet, you should be perfectly fine with AI companies gobbling it up.

    Whether Murati was keeping things close to the vest to avoid more copyright litigation or simply just didn’t know the answer, people have good reason to wonder where AI data — be it “publicly available and licensed” or not — is coming from.


    The original article contains 667 words, the summary contains 178 words. Saved 73%. I’m a bot and I’m open source!

    • A_Very_Big_Fan@lemmy.world

      Funny how we have all this pissing and moaning about stealing, yet nobody ever complains about this bot actually lifting entire articles and spitting them back out without ads or fluff. I guess it’s different when you find it useful, huh?

      I like the bot, but I mean y’all wanna talk about copyright violations? The argument against this bot is a hell of a lot more solid than just using data for training.

      • Guntrigger@feddit.ch

        Is this bot a closed system which is being used for profit? No, you know exactly what its source is (the single article it is condensing), and it even has a handy link pointing out that it is open source at the end of every single post.

        • A_Very_Big_Fan@lemmy.world

          It copied all of its text from the article, and it allows me to get all the information from it I want without providing that publisher with traffic or ad revenue. That’s not fair use.

          I do like the bot, and personally I’d rather it stay, but no matter how you look at it this isn’t “fair use” of the article.

          • Guntrigger@feddit.ch

            Interesting take. In all of the defences of LLMs using copyrighted material it’s very often highlighted that “fair use” allows exactly such summaries of larger texts.

            In reality, “fair use” is ruled on a case by case basis, so it’s impossible to judge whether something is or not without it going to court.

            • A_Very_Big_Fan@lemmy.world

              We’re not making legislation here, so we don’t have that level of burden of proof. But either way, when it comes to factors of fair use that every authority on the matter will list, it violates almost all of them.

              It’s non-commercial, and it’s using facts rather than using a more creative work, so it’s got that going for it… But it’s

              • composed of 100% copied material

              • it’s not transformative

              • it’s substituting the original work

              • it uses officially published work

              • it specifically copies the “heart” of the work

              • it bypasses all of the ads and impacts their traffic/metrics so it has a financial impact on them.

              It’s pretty obvious that there is no argument here. The factors that are violated the hardest and most indisputably are the ones that most authorities on the matter (including the one I linked) agree are the most important.

    • jaemo@sh.itjust.works

      It also tells us how hypocritical we all are since absolutely every single one of us would make the same decisions they have if we were in their shoes. This shit was one bajillion percent inevitable; we are in a river and have been since we tilled soil with a plough in the Nile valley millennia ago.

      • whoisearth@lemmy.ca

        Speak for yourself. Were I in their shoes, no, I would not. But then again my company wouldn’t be as big as theirs for that reason.

      • adrian783@lemmy.world

        most of us would never be in their shoes because most of us are not sociopathic techbros

  • Gakomi@lemmy.world

    No company CEO knows shit about what goes on in the dev department, so her answer does not surprise me; ask the devs or the team leader in charge of the project. The CEO is only there to make sure the company makes money, as he and the shareholders only care about money!

      • Gakomi@lemmy.world

        She should, but she does not. As I mentioned in another post, anyone at team leader level or above in all the companies I’ve worked at so far barely had any technical skill and didn’t have any idea about this shit, only some bits and pieces that they got through documentation that the dev team made. They had some vague idea of how our infrastructure works, but that’s about it.

      • sunbeam60@lemmy.one

        She knows the answer. She doesn’t know the legal status of the answer, so she blanks. Been there before; I’ve got some sympathy for being in the limelight and being asked a tough question.

        As my media trainer said, if you aren’t willing to discuss a subject, make it a condition of the interview. Once the camera rolls, declining to answer seems incredibly suspect.

  • TheObviousSolution@lemm.ee

    Then wipe it out and start again once you have sorted out where your data is coming from. Are we acting like they haven’t built datacenters packed full of NVIDIA processors for exactly this sort of retraining? They are choosing to build AI without proper sourcing; that’s not an AI limitation.

    • BoscoBear@lemmy.sdf.org

      I don’t think so. They aren’t reproducing the content.

      I think the equivalent is you reading this article, then answering questions about it.

      • A_Very_Big_Fan@lemmy.world

        Idk why this is such an unpopular opinion. I don’t need permission from an author to talk about their book, or permission from a singer to parody their song. I’ve never heard any good arguments for why it’s a crime to automate these things.

        I mean hell, we have an LLM bot in this comment section that took the article and spat 27% of it back out verbatim, yet nobody is pissing and moaning about it “stealing” the article.

        • MostlyGibberish@lemm.ee

          Because people are afraid of things they don’t understand. AI is a very new and very powerful technology, so people are going to see what they want to see from it. Of course, it doesn’t help that a lot of people see “a shit load of cash” from it, so companies want to shove it into anything and everything.

          AI models are rapidly becoming more advanced, and some of the new models are showing sparks of metacognition. Calling that “plagiarism” is being willfully ignorant of its capabilities, and it’s just not productive to the conversation.

          • A_Very_Big_Fan@lemmy.world

            True

            Of course, it doesn’t help that a lot of people see “a shit load of cash” from it, so companies want to shove it into anything and everything.

            And on a similar note to this, I think a lot of what it is is that OpenAI is profiting off of it and went closed-source. Lemmy being a largely anti-capitalist and pro-open-source group of communities, it’s natural to have a negative gut reaction to what’s going on, but not a single person here, nor any of my friends that accuse them of “stealing” can tell me what is being stolen, or how it’s different from me looking at art and then making my own.

            Like, I get that the technology is gonna be annoying and even dangerous sometimes, but maybe let’s criticize it for that instead of shit that it’s not doing.

            • Mnemnosyne@sh.itjust.works

              One problem is that people see those whose work may no longer be needed or as profitable, and… they rush to defend it, even if those same people claim to be opposed to capitalism.

              They need to go ‘yes, this will replace many artists and writers… and that’s a good thing because it gives everyone access to being able to create bespoke art for themselves’, but at the same time realize that while this is a good thing, it also means a societal shift to support people outside of capitalism is needed.

              • MostlyGibberish@lemm.ee

                it also means a societal shift to support people outside of capitalism is needed.

                Exactly. This is why I think arguing about whether AI is stealing content from human artists isn’t productive. There’s no logical argument you can really make that a theft is happening. It’s a foregone conclusion.

                Instead, we need to start thinking about what a world looks like where a large portion of commercially viable art doesn’t require a human to make it. Or, for that matter, what does a world look like where most jobs don’t require a human to do them? There are so many more pressing and more interesting conversations we could be having about AI, but instead we keep circling around this fundamental misunderstanding of what the technology is.

            • MostlyGibberish@lemm.ee

              I can definitely see why OpenAI is controversial. I don’t think you can argue that they didn’t do an immediate heel turn on their mission statement once they realized how much money they could make. But they’re not the only player in town. There are many open source models out there that can be run by anyone on varying levels of hardware.

              As far as “stealing,” I feel like people imagine GPT sitting on top of this massive collection of data and acting like a glorified search engine, just sifting through that data and handing you stuff it found that sounds like what you want, which isn’t the case. The real process is, intentionally, similar to how humans learn things. So, if you ask it for something that it’s seen before, especially if it’s seen it many times, it’s going to know what you’re talking about, even if it doesn’t have access to the real thing. That, combined with the fact that the models are trained to be as helpful as they possibly can be, means that if you tell it to plagiarize something, intentionally or not, it probably will. But, if we condemned any tool that’s capable of plagiarism without acknowledging that they’re also helpful in the creation process, we’d still be living in caves drawing stick figures on the walls.

      • ...m...@ttrpg.network

        …with the prevalence of clickbaity bottom-feeder news sites out there, i’ve learned to avoid TFAs and await user summaries instead…

        (clicks through)

          …yep, seven, no, nine ads plus another pop-over, about 15% of window real estate dedicated to the actual story…

        • neptune@dmv.social

            The issue is that the LLMs do often just verbatim spit out things they plagiarized from other sources. The deeper issue is that even if/when they stop that from happening, the technology is clearly going to make most people agree our current copyright laws are insufficient for the times.

            • neptune@dmv.social

              That’s one example, plus I’m talking generally about why this is an important question for a CEO to answer and why people generally think LLMs may infringe on copyright and be bad for creative people

              • A_Very_Big_Fan@lemmy.world

                I’m talking generally about why this is an important question for a CEO to answer …

                Right, which your only evidence for is “LLMs do often just verbatim spit out things they plagiarized from other sources” and that they aren’t trying to prevent this from happening.

                Which is demonstrably false, and I’ll demonstrate it with as many screenshots/examples as you want. You’re just wrong about that (at least about GPT). You can also demonstrate it yourself, and if you can prove me wrong I’ll eat my shoe.

      • Linkerbaan@lemmy.world

        Actually, neural networks verbatim reproduce this kind of content when you ask the right question, such as “finish this book”, and the creator hasn’t censored it out well.

        They use an encoded version of the source material to create “new” material.

        • BoscoBear@lemmy.sdf.org

          Sure, if that is what the network has been trained to do, just like a librarian will if that is how they have been trained.

          • Linkerbaan@lemmy.world

            Actually it’s the opposite, you need to train a network not to reveal its training data.

            “Using only $200 USD worth of queries to ChatGPT (gpt-3.5- turbo), we are able to extract over 10,000 unique verbatim memorized training examples,” the researchers wrote in their paper, which was published online to the arXiv preprint server on Tuesday. “Our extrapolation to larger budgets (see below) suggests that dedicated adversaries could extract far more data.”

            The memorized data extracted by the researchers included academic papers and boilerplate text from websites, but also personal information from dozens of real individuals. “In total, 16.9% of generations we tested contained memorized PII [Personally Identifying Information], and 85.8% of generations that contained potential PII were actual PII.” The researchers confirmed the information is authentic by compiling their own dataset of text pulled from the internet.

            • BoscoBear@lemmy.sdf.org

              Interesting article. It seems to be about a bug, not a designed behavior. It also says it exposes random excerpts from books and other training data.

              • Linkerbaan@lemmy.world

                It’s not designed to do that because they don’t want to reveal the training data. But factually all neural networks are a combination of their training data encoded into neurons.

                When given the right prompt (or image generation question) they will exactly replicate it, because that’s how they have been trained in the first place: replicating their source images with as few neurons as possible, and tweaking them when it’s not correct.
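
                As a toy illustration (made-up numbers, nothing to do with any real model), this is the kind of training loop that “tweaks the weights when it’s not correct” until the single training example comes back out essentially verbatim:

                ```python
                # Tiny linear "network" memorising one training example by gradient descent.
                import numpy as np

                rng = np.random.default_rng(0)
                prompt = np.eye(4)[0]                   # stand-in for "the right question" (one-hot input)
                target = rng.uniform(size=8)            # stand-in for a piece of training data

                W = rng.normal(scale=0.1, size=(8, 4))  # the network's only weights
                lr = 0.5

                for _ in range(500):
                    out = W @ prompt                       # forward pass
                    grad = np.outer(out - target, prompt)  # gradient of the squared error
                    W -= lr * grad                         # tweak the weights where the output is wrong

                print(np.allclose(W @ prompt, target))     # True: the example is now encoded in the weights
                ```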

                • BoscoBear@lemmy.sdf.org

                  That is a little like saying every photograph is a copy of the thing. That is just factually incorrect. I have many three-layer networks that are not the thing they were trained on. As a compression method they can be very lossy, and in fact that is often the point.