Elon Musk said Thursday that his artificial intelligence startup, xAI, has used technology from OpenAI to train its own AI models, a process commonly called distillation. He made the acknowledgment while being cross-examined in his lawsuit accusing the rival AI firm of abandoning its promised nonprofit structure by switching to a profit-driven model.

Key Takeaways
- An OpenAI attorney asked Musk if xAI ever “distilled” technology from OpenAI, to which Musk responded, “Generally A.I. companies distill other A.I. companies,” according to The New York Times.
- Distillation involves using outputs from a larger AI model to train a smaller one, and Musk said xAI “partly” used OpenAI’s technology to train its own AI models.
- OpenAI’s terms of service prohibit outputs from being used to train competing AI models.
What Is AI Distillation?
Distillation uses a large AI model to teach a smaller one, allowing the smaller model to operate efficiently while using less computing power. The method can train the smaller model without the high expenses typically attached to building an AI model from scratch.
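In machine-learning terms, a common form of distillation trains the smaller "student" model to match the larger "teacher" model's softened output probabilities rather than learning from raw data alone. A minimal sketch in plain Python illustrates the idea; the logits below are made-up numbers for illustration, not outputs from any real model:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores into probabilities. A higher temperature
    softens the distribution, exposing more of the teacher's preferences
    among near-miss answers."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's: the quantity the student is trained to minimize."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s))

# Hypothetical scores over four answer choices (illustrative only).
teacher = [4.0, 1.5, 0.5, -2.0]
student = [2.0, 2.0, 0.0, -1.0]

# Training repeatedly nudges the student's weights to drive this loss
# toward zero, so the student inherits the teacher's behavior without
# re-running the teacher's costly training.
loss = distillation_loss(teacher, student)
```

When the student's outputs match the teacher's exactly, the loss is zero; the further apart the two distributions are, the larger the loss, which is why querying a rival's model at scale can substitute for expensive original training data.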
What Is So Controversial About Distillation?
The controversy around distillation centers on one company using another company's AI outputs to train its own model without incurring the steep costs of research and development. Training AI models like ChatGPT and Google's Gemini can cost over $100 million, and development costs are expected to climb further as more sophisticated models are built. DeepSeek, the Chinese AI startup accused of distilling OpenAI and Anthropic technology, has claimed it cost just $294,000 to train its R1 model. Anthropic has also said distillation poses a risk to national security, saying in a statement earlier this year that distilled models can lack safeguards, such as those put in place to stop bad actors from creating bioweapons or carrying out cyberattacks.
Key Background
OpenAI banned accounts over suspected distillation earlier this year, when it accused DeepSeek of using OpenAI technology to train an open-source model that DeepSeek claimed was cheaper to use and as efficient as, or more efficient than, models from leading AI firms like OpenAI. Anthropic likewise accused DeepSeek and other Chinese companies, including Moonshot AI and MiniMax, of "industrial-scale campaigns" to use the abilities of its AI model, Claude, to enhance their own models. Anthropic said its terms of service were violated, identifying an alleged 16 million exchanges the three companies made with Claude through roughly 24,000 fraudulent accounts. When Anthropic made the distillation accusations, Musk fired back in a tweet pointing to Anthropic's $1.5 billion settlement last year, which resolved a lawsuit accusing it of using pirated books to train its AI models.
This article was originally published on forbes.com and all figures are in USD.