The nonprofit organization Common Sense Media, which focuses on children’s safety in digital environments, has published a risk assessment of Google’s Gemini AI. Its experts concluded that the system poses a “high risk” to teenagers.
According to the report, the “Under 13” and “Teen Experience” versions of Gemini differ little from the adult product: they are essentially the same system with a few added restrictions. As a result, the AI can still serve children inappropriate and unsafe content, including unsafe mental health advice and information about sex, drugs, and alcohol.
“Gemini gets the basics right but makes mistakes in the details. For AI to be safe and effective for children, it must be designed with their needs and development in mind, not just presented as a modified version of an adult product,” said Robbie Torney, Senior Director of AI Programs at Common Sense Media.
Experts also highlighted the risk of teenagers developing an illusion of “friendship” with the AI. Although Google has built safeguards against such scenarios, the report found that these protections do not always work as intended.
Google disagreed with Common Sense Media’s conclusions but acknowledged that certain safety measures had not functioned properly. The company said it continues to improve its filters and consult with experts, and that it has already introduced additional layers of protection for users under 18.
Common Sense Media has previously evaluated other AI services as well: Meta AI and Character.AI were rated “unacceptable,” Perplexity “high risk,” ChatGPT “moderate risk,” and Claude “minimal risk.”