A troubling trend is emerging across the United States: teenagers are using artificial intelligence tools to create fake nude images of their classmates. According to a report by the New York Post, experts are sounding the alarm, calling it a disturbing new form of bullying that could leave lasting psychological scars.
The rapid advancement of AI-driven image generation has made it increasingly easy for anyone with minimal technical skill to manipulate photos. Teenagers are now using these tools to produce highly realistic fake intimate images of their peers without their knowledge or consent.
Experts emphasize that this behavior constitutes a severe invasion of privacy and a digital extension of sexual harassment. Dr. Sameer Hinduja, co-director of the Cyberbullying Research Center, described the phenomenon as a “modern-day form of exploitation,” warning that it can have devastating effects on victims’ mental health, including anxiety, depression, and trauma.
Legal responses are struggling to keep pace with the technology. While many states have laws against revenge porn and the non-consensual distribution of intimate images, existing statutes often do not specifically address AI-generated fakes. Advocates are pushing for updated legislation that explicitly criminalizes the creation and distribution of synthetic intimate content without consent.
Schools are also grappling with how to respond. Some districts have begun revising their codes of conduct to include punishments for AI-related harassment. Educators stress the importance of digital literacy and responsible technology use, and say students must understand the real-world consequences of their online behavior.
Parents and guardians are being urged to have open conversations with their children about the ethical use of AI and the serious harm that fake intimate images can cause. Experts also recommend monitoring children’s digital activities and encouraging them to report any suspicious behavior to trusted adults.
Meanwhile, social media platforms are under increased scrutiny for hosting AI tools and allowing such content to spread. Major companies face growing pressure to develop better safeguards, detection systems, and reporting mechanisms to prevent the misuse of AI technology in ways that harm minors.
As artificial intelligence becomes more deeply embedded in everyday life, experts warn that society must adapt rapidly to ensure that technological innovation does not outpace ethical standards and legal protections.