The British feminist writer Angela Carter wrote that “Comedy is tragedy that happens to other people”.
And right now, quite a few people are chuckling at a tragedy that’s befallen OpenAI amid claims that Chinese artificial intelligence startup DeepSeek “stole” the US startup’s data to train its large language model (LLM), R1.
A quick recap if you’re not up to speed on the story of the week.
DeepSeek supposedly achieved similar results training its model to OpenAI's ChatGPT for around 6% of the cost of its US competitor. The news wiped around US$1 trillion in value from US tech stocks, including a record one-day fall for AI chipmaker NVIDIA, and suddenly left the US tech gods looking like they had feet of clay.
But the idea that DeepSeek is benefitting from the work of others is the lovechild of karma and irony, because OpenAI founder Sam Altman has built a US$157 billion AI empire doing exactly that.
Venture capitalist David Sacks, the new Trump White House artificial intelligence czar, claimed there’s “substantial evidence” DeepSeek “distilled the knowledge out of OpenAI’s models”.
Distillation, he explained, is like a parent teaching a kid, passing on their knowledge, with one AI model learning from the other by asking millions of questions to mimic that wisdom.
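Sacks' parent-and-kid analogy can be sketched in toy code. This is a hedged illustration only – no real OpenAI or DeepSeek APIs are involved, and the `teacher`, `distill` and `Student` names are all hypothetical stand-ins – showing a "student" model built purely from a "teacher" model's answers:

```python
# Toy sketch of distillation: a cheap "student" learns by querying an
# expensive "teacher" and imitating its answers. All names are hypothetical.

def teacher(prompt: str) -> str:
    """Stand-in for a large, expensive model's responses."""
    answers = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return answers.get(prompt, "I don't know")

def distill(prompts):
    """Build a training set by asking the teacher questions
    (millions in practice; a handful here) and recording its answers."""
    return [(p, teacher(p)) for p in prompts]

class Student:
    """'Student' model trained purely on the teacher's recorded outputs."""
    def __init__(self, training_set):
        self.memory = dict(training_set)

    def __call__(self, prompt: str) -> str:
        return self.memory.get(prompt, "I don't know")

dataset = distill(["capital of France?", "2 + 2?"])
student = Student(dataset)
```

In a real system the student would be a neural network trained on the teacher's outputs rather than a lookup table, but the flow is the same: the student never sees the teacher's training data, only its answers.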
AI is always standing on the shoulders of giants, but lacks the humility to admit it. (It's also worth noting that Sacks has invested in Elon Musk's xAI.)
OpenAI spokeswoman Liz Bourgeois was quoted in The New York Times saying: “We know that groups in [China] are actively working to use methods, including what’s known as distillation, to replicate advanced US AI models. We are aware of and reviewing indications that DeepSeek may have inappropriately distilled our models, and will share information as we know more.”
Hoist with his own petard
So why the comedy? Well, Sam Altman has been hoist with his own petard, as a generative AI trained on Shakespeare might put it.
OpenAI is engaged in copyright fights around the world. The New York Times – whose market capitalisation is around 6% of the AI company’s valuation – is among them, suing the AI titan and accusing it last year of deleting evidence.
OpenAI’s submission to dismiss the NYT case 12 months ago goes so far as to accuse the newspaper of hacking it.
“There is a genuinely important issue at the heart of this lawsuit—critical not just to OpenAI, but also to countless start-ups and other companies innovating in this space—that is being litigated both here and in over a dozen other cases around the country (including in this Court): whether it is fair use under copyright law to use publicly accessible content to train generative AI models to learn about language, grammar, and syntax, and to understand the facts that constitute humans’ collective knowledge,” the submission says.
“For good reason, there is a long history of precedent holding that it is perfectly lawful to use copyrighted content as part of a technological process that (as here) results in the creation of new, different, and innovative products.”
But OpenAI has been arguing for years that it needs free access to intellectual property (IP) – i.e. copyrighted work. Central to its US District Court submission is the notion that using copyrighted material to train LLMs is protected by fair use – and that neither consent nor recompense is required.
Other AI companies, from Meta to Anthropic, have gone down a similar path.
The risk lurks in the fine print of offerings from the likes of Canva’s Leonardo.ai, which say they will indemnify users should any issues arise.
OpenAI’s defence against what’s happened is that it’s against its rules for other AI companies to copy what it’s doing.
A pickpocket robbed
Understory founder Ben Liebmann, a vocal critic of generative AI’s exploitation of the creative sector, was droll on LinkedIn.
“That must be devastating—almost as devastating as, I don’t know, OpenAI and their peers taking the work of artists, songwriters and musicians, authors and journalists, designers, and television producers and filmmakers, without consent or compensation, to train AI platforms that now seek to replace the very people who created them,” he wrote.

Ben ‘Liebmeme’ has some fun.
“But sure, do tell us more about your sudden moral concerns about the concept of consent and the importance of intellectual property rights. Let’s catch up when you’re back from Damascus.”
Now, as many foreign companies venturing into China have learnt to their peril, the country is not always the greatest respecter of Western IP law. But in this instance, it feels a little like trying to summon empathy for a pickpocket complaining they’ve been robbed.
As Ben Shepherd pointed out in his LinkedIn column, OpenAI told the UK parliament “that its content-generating ChatGPT product would be impossible to create without the company’s use of human-created copyrighted work for free”.
Noting the “rules for thee and not for me” vibe, Shepherd wrote:
“OpenAI is super cool on content stealing until it costs them money. Then it’s serious business and a matter for national security that demands government intervention.
“But they aren’t that keen on government intervention on (sic) other areas as it’s overbearing and not cool when it comes to tech. So, it’s an agile approach to using the government.”
One of the greatest ironies in tech is how much everyone loves to talk about disruption. Until it comes for them too. Then guess who’s screaming the loudest.
It’s commedia dell’arte.
Thanks to Amrit Rupa for this on the new “sharing economy” – I’m training my jokes on this.

Source: Amrit Rupa