The New York Times is suing OpenAI and Microsoft for copyright infringement, claiming the two companies built their AI models by “copying and using millions” of the publication’s articles and now “directly compete” with its content as a result.

As outlined in the lawsuit, the Times alleges OpenAI and Microsoft’s large language models (LLMs), which power ChatGPT and Copilot, “can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style.” This “undermine[s] and damage[s]” the Times’ relationship with readers, the outlet alleges, while also depriving it of “subscription, licensing, advertising, and affiliate revenue.”

The complaint also argues that these AI models “threaten high-quality journalism” by hurting the ability of news outlets to protect and monetize content. “Through Microsoft’s Bing Chat (recently rebranded as “Copilot”) and OpenAI’s ChatGPT, Defendants seek to free-ride on The Times’s massive investment in its journalism by using it to build substitutive products without permission or payment,” the lawsuit states.

The full text of the lawsuit can be found here.

  • HarkMahlberg@kbin.social

    I guarantee you OpenAI and others didn’t even buy a lot of the material they use to train the AI models on.

    My hunch is that if they had actually bought or properly licensed that material, they would have been bankrupt before the first version of ChatGPT came online. And if that’s true, then OpenAI owes its entire existence to its piracy.

    • CJOtheReal@ani.social

      It’s not piracy to just web-scrape everything for data…

      There isn’t a person sitting around pirating shit; it’s an algorithm that takes everything from the internet it can reach.
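
      Roughly, the whole “algorithm” part is just something like this: a toy sketch of an indiscriminate crawler using the common requests and BeautifulSoup libraries (the seed URL is made up, and real pipelines are vastly bigger):

      ```python
      import requests
      from bs4 import BeautifulSoup
      from urllib.parse import urljoin

      corpus = {}                                    # url -> raw HTML
      seen, queue = set(), ["https://example.com/"]  # hypothetical seed URL

      while queue:
          url = queue.pop()
          if url in seen:
              continue
          seen.add(url)
          try:
              page = requests.get(url, timeout=10).text
          except requests.RequestException:
              continue
          # Keep everything reachable; no check for copyright, licensing, or source.
          corpus[url] = page
          for link in BeautifulSoup(page, "html.parser").find_all("a", href=True):
              queue.append(urljoin(url, link["href"]))
      ```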

      • HarkMahlberg@kbin.social

        Yeah… That’s not a good defense if you think about it. If someone made a Reddit comment containing the entire contents of Discworld (idk, just an example), and OpenAI scraped all of Reddit to train their model, then they’ve used copyrighted material without paying for a commercial license, and they’re on the hook. By being unscrupulous about their scraping, they actually open themselves up to more liability than if they were more careful about what they scrape and where.
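
        And “more careful” isn’t even hard to imagine. Here’s a toy sketch (the licensed-domain list is made up) of a scraper that only fetches from sources it actually has a deal with and still respects robots.txt:

        ```python
        import urllib.robotparser
        from urllib.parse import urlparse

        LICENSED_DOMAINS = {"example-licensed-news.com"}  # hypothetical licensed sources

        def allowed_to_scrape(url: str, user_agent: str = "MyCrawler") -> bool:
            host = urlparse(url).netloc
            # Only fetch from sources we actually have an agreement with...
            if host not in LICENSED_DOMAINS:
                return False
            # ...and even then, respect the site's robots.txt.
            robots = urllib.robotparser.RobotFileParser(f"https://{host}/robots.txt")
            robots.read()
            return robots.can_fetch(user_agent, url)
        ```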

        This is all to say nothing of the fact that several other major companies were caught with their pants down training on databases explicitly created by torrenting a ton of books.

        https://torrentfreak.com/authors-accuse-openai-of-using-pirate-sites-to-train-chatgpt-230630/

        There is no direct evidence that OpenAI used pirate sites to train ChatGPT. That said, it is no secret that some AI projects have trained on pirated material in the past, as an excellent summary from Search Engine Journal highlights.

        The mainstream media has picked up this issue too. The Washington Post previously reported that the “C4 data set,” which Google and Facebook used to train their AI models, included Z-Library and various other pirate sites.

        • PlasterAnalyst@kbin.social

          If I read an article and then I reference it or summarize it myself, that isn’t copyright infringement. There’s no difference if I have a computer do the work for me. It’s fair use.

        • CJOtheReal@ani.social

          Everyone accuses OpenAI of everything. In the end, most of what they do will not be illegal, for lots of reasons, mainly the technical issues involved. You would need a database of every copyrighted work to check anything, and the computing power required for that would be absurdly high.
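
          Just to spell out what that check would even mean, here is a toy sketch: fingerprint every 8-word window of a scraped page and look it up in a set built from known works. The per-page lookup is the easy part; the “database of every copyrighted work” that has to feed known_fingerprints is the part that doesn’t exist at that scale (the sample passage below is a stand-in):

          ```python
          import hashlib

          def shingles(text: str, n: int = 8):
              # Hash every n-word window of the text.
              words = text.split()
              for i in range(len(words) - n + 1):
                  yield hashlib.sha256(" ".join(words[i:i + n]).encode()).hexdigest()

          # Stand-in for "a database of every copyrighted work"; the real thing
          # would be billions of entries someone has to assemble and keep current.
          known_fingerprints = set(shingles(
              "a long stand-in passage representing one of the millions of protected works"
          ))

          def looks_copied(scraped_text: str) -> bool:
              # Flag the page if any 8-word window matches a known work.
              return any(h in known_fingerprints for h in shingles(scraped_text))
          ```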

          The demands are idiotic and ridiculous.

          And as said, they didn’t “train ChatGPT on a piracy site”; the scraping algorithm put some stuff from there into the training data. There is no person doing that.

          • HarkMahlberg@kbin.social

            There is no person doing that.

            “No one’s responsible, the DAO did it. No humans are liable, just this amorphous, sentient carbon cloud.”

            I’ve heard many defenses of AI, some of which I agree with, but “strip mining content off the internet is fine because it’s automated” is easily one of the weakest. It doesn’t pass the sniff test.

            If you write a script that downloads every single image from every single website, no questions asked, and then reuploads them to various websites at random, do you suppose the police shouldn’t charge you with (inevitably) possessing and distributing CSAM? “Oh no, officer, your true culprit is the Dell in my living room! Arrest that box!”

            Everyone is, on some level, responsible for the things they create.

          • EvilMonkeySlayer@kbin.social

            And as said, they didn’t “train ChatGPT on a piracy site”; the scraping algorithm put some stuff from there into the training data. There is no person doing that.

            “Your honour, my program that I created to slurp up data from the internet, using my paid-for internet connection, into my trained AI model that I own and control, happened to slurp up copyrighted data… I, um, it’s not my fault it slurped up copyrighted data, even though I put no checks in place for it to check what it was slurping up or from where.”

            That is the argument you are putting forth.

            Do you think any judge/court of law would view that favourably?