In a letter sent on April 1 to Google CEO Sundar Pichai and YouTube CEO Neal Mohan, the group raised concerns about the growing volume of low-quality videos produced by AI tools but labeled as educational, according to Bloomberg.
The signatories criticized what they described as a surge of content creators using AI to mass-produce videos aimed at children - one of the most impressionable and vulnerable audiences online - primarily for profit.
They warned that such content, often referred to as “AI junk,” lacks substance and may negatively affect children’s cognitive development. According to the letter, exposure to this type of content could shorten attention spans and blur the line between reality and fiction.
Advocates also expressed concern that increasing screen time driven by such videos is replacing real-world activities that are essential for children’s emotional and social development.
“There is still much we do not understand about the effects of AI-generated content on children,” the group wrote.
Among the signatories is social psychologist Jonathan Haidt, author of The Anxious Generation, who has been at the forefront of global efforts to address the impact of social media and smartphones on young people.
Child advocacy organizations such as Fairplay and the National Alliance for Youth Health, along with the American Federation of Teachers and several schools, also signed the letter.
Responding to Bloomberg, YouTube spokesperson Boot Bullwinkle said the platform maintains “high standards” for content on YouTube Kids, including limiting AI-generated videos to a small group of vetted, high-quality channels.
He added that parents have tools to block specific channels and that the company prioritizes transparency by labeling AI-generated content. Creators are also required to disclose when videos contain highly realistic AI elements.
However, the advocates argue that such measures fall short, particularly because young children often cannot read or understand content labels.
AI-generated videos have become increasingly common on YouTube, especially in content aimed at toddlers and young children. For many creators, automation significantly reduces production costs and effort. Some have even begun sharing tutorials on how to build businesses around producing videos for infants using AI tools.
Bullwinkle countered that mass-producing low-quality content is not a sustainable strategy on YouTube, as the platform’s monetization systems are designed to penalize such practices.
Earlier this year, Mohan stated that tackling “AI junk” and keeping YouTube a safe and enjoyable space is a top priority for 2026. At the same time, the platform maintains that not all AI-generated content is harmful and that, when used responsibly, AI can enable meaningful creativity.
The debate comes amid broader scrutiny of major tech platforms. In late March, a US court ruled that Google and Meta could be held accountable for harm caused to a young user by products deemed “addictive.”
While both companies have said they will appeal, the ruling has intensified pressure from consumer advocates and lawmakers, who are calling for changes to platform design - including how recommendation algorithms operate.
Du Lam