OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.

OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

  • Hello_there@kbin.social · 1 year ago

    I doubt they did the ‘rewrite this text like this’ prompt you describe. That would come out at trial if it were that simple, and it would be a giant black mark on the paper for filing a frivolous lawsuit.

    If we rule that out, then it means that GPT had the article text in its knowledge base, and the NYT was able to get it to copy that text out in its response.
    Even that is problematic. Either GPT does this a lot and usually rewrites it better, or it only does it sometimes. Both are copyright offenses.

    The NYT has copyright over its article text, and it didn’t license GPT to reproduce it. Even if they had to coax the text out through lots of prompts and creative trial and error, it still stands that GPT copied that text, reproduced it, and made money off that act without the agreement of the rights holder.

    • ricecake@sh.itjust.works · 1 year ago

      They have copyright over their article text, but they don’t have copyright over rewordings of their articles.

      It doesn’t seem so cut and dried to me, because “someone read my article, and then I asked them to write an article on the same topic, and for each part that was different I asked them to change it until it was the same” doesn’t feel like infringement.

      I suppose I want to see the actual prompts to have a better idea.

      • Hello_there@kbin.social · 1 year ago

        I can take the entirety of Harry Potter, run it through ChatGPT to ‘rewrite it in the style of The Lord of the Rings’, and rename the characters. Assuming it all works correctly, everything should be reworded. But I would still get deservedly sued into the ground.
        News articles might be a different subject matter, but a blatant rewording of each sentence, line by line, still seems like a valid copyright claim.
        You have to add context or nuance, or use multiple sources. Some kind of original thought. You can’t just wholly repackage someone else’s work and profit off of it.
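
        Roughly, the pipeline I have in mind is something like this; the model name, input file, and renamings are placeholders I made up, not anything from the actual case:

        ```python
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        # Hypothetical input file and renamings, purely for illustration.
        chapter = open("hp_chapter_1.txt", encoding="utf-8").read()

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; any chat model would do
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Rewrite the user's text in the style of The Lord of the Rings. "
                        "Rename Harry to Berendil and Hogwarts to Eldamar."
                    ),
                },
                {"role": "user", "content": chapter},
            ],
        )

        # Print the reworded chapter.
        print(response.choices[0].message.content)
        ```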

        • ricecake@sh.itjust.works · 1 year ago

          But that’s not what LLMs do. They don’t just reword stuff like the search-and-replace feature in Word; it’s closer to ‘write a sentence with the same meaning’.
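
          To make the distinction concrete, here’s a toy contrast; the sentences and names are invented for illustration:

          ```python
          original = "Harry walked into the castle and greeted his friends."

          # Mechanical search-and-replace: swap tokens, but the structure and
          # phrasing of the source sentence survive untouched.
          mechanical = original.replace("Harry", "Berendil").replace("castle", "keep")
          # -> "Berendil walked into the keep and greeted his friends."

          # What an LLM paraphrase tends to produce: same meaning, rebuilt
          # sentence, no fixed token mapping back to the source text.
          paraphrase = "Greeting his companions, Berendil strode through the keep."
          ```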

          I’d agree it’s a lot murkier when it’s the plot that’s your IP, and not just the actual written words and editorial perspective, like a news article.

          I think it’s also a question of whether it’s copyright infringement for the tool to pull in the data and process it, or whether the infringement happens when you actually use it to produce the infringing content.