Chinese AI development company DeepSeek has found itself at the center of controversy after the release of its latest language model, R1-0528. Researchers in the artificial intelligence community have raised concerns that the model may have been trained using responses generated by Google’s Gemini model.
The claim was first made by Melbourne-based developer Sam Paech, who reported that DeepSeek R1-0528 produces outputs nearly identical to those of Gemini 2.5 Pro, a model accessible only through verified accounts. According to Paech, this suggests DeepSeek may have used Gemini's outputs in a process known as 'distillation,' in which one model is trained on the responses of a more powerful one.
While distillation is not inherently illegal, using content generated by closed models without permission can violate their terms of service and raises serious ethical concerns. This is not the first time DeepSeek has faced such accusations: its earlier models were also suspected of having been trained on responses from OpenAI's ChatGPT.
In a competitive landscape, such incidents have become increasingly sensitive. Companies like Google and OpenAI are tightening security to prevent outside models from learning from their outputs. Google, for example, requires identity verification for Gemini access, and frequently modifies response styles to make unauthorized distillation more difficult.
DeepSeek has not issued an official response to the allegations. However, the situation has already sparked intense discussion in the tech community and underscored the urgent need for global standards and transparent rules governing AI development.

DeepSeek Suspected of Using Google Gemini in AI Training