AI and legal experts told the FT this “memorization” ability could have serious ramifications on AI groups’ battle against dozens of copyright lawsuits around the world, as it undermines their core defense that LLMs “learn” from copyrighted works but do not store copies.
Sam Altman would like to remind you each Old Lady at a Library consume 284 cubic feet of Oxygen a day from the air.
Also, hey at least they made sure to probably destroy the physical copy they ripped into their hopelessly fragmented CorpoNapster fever dream, the law is the law.



Ugh, not more apologia for the LLM assholes.
First of all, this is not what they did:
They did this:
And the LLMs spat out, “say that they were perfectly normal, thankyou very much.”
They then simply prompted “Continue”, and the LLMs continued the story until guard rails hit and they refused to continue, or there was a stop phrase like “The end”, in some cases with 95.8% accuracy.
Can you prove this premise? Because without it your entire defense falls apart.
Isn’t it weird that Anthropic nor Microsoft nor Meta nor X nor OpenAI (nor any other big LLM player) have funded what would be very cheap studies to prove this premise, in the light of the many multibillion dollar lawsuits they’re on the docket for. They are not strapped for cash nor any other resource.
Memorization is a very real LLM problem and this outcome is even surprising experts, whom very much know how LLMs work.
It also flatly ignores that this is a known problem for the commercial LLMs, which is why they specifically put in guardrails to try to prevent people from extracting copyright novel text, copyright song lyrics, and other stolen data they’ve claimed they didn’t even use (and in Anthropic’s case, had to walk back in court and change their defence to “uhh… it’s not copyright breech, it’s transformative, bro”).
Anthropic’s defence (per the article) is essentially, “Bro why would you pay for the prompts to jailbreak our AI with a best-of-N attack just to spit out a copy of a copyright novel - its cheaper to just buy the book?”
Not, “hey look, even AIs not trained on that book can spit out that book. Look at these studies: […]”, because that defence is fantasy.