In January 2026, a simple Gmail login exposed 50,000 children’s private conversations with an AI toy—names, birthdates, and family details—all stored in plaintext on a web console.
Security researchers Joseph Thacker and Joel Margolis discovered the flaw in a popular AI toy's web console: any Gmail account could sign in, and the console never checked whether that account was authorized to view the data behind it. The exposed records included chat transcripts, collected to train the toy's language models, that were rich in sensitive personal details. The incident illustrates the first of two critical risks: technical vulnerabilities such as broken access controls and unencrypted data storage.
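To make the access-control failure concrete, the sketch below contrasts the vulnerable pattern with a minimal fix. It is a hypothetical Python illustration of this flaw class, not the vendor's actual code; the `TRANSCRIPTS` store, `GUARDIANS` mapping, and function names are all invented for the example.

```python
from dataclasses import dataclass


@dataclass
class User:
    email: str  # the Gmail address the console authenticated


# Plaintext transcript store keyed by child ID. This models the "unencrypted
# data storage" problem: anyone who can query the store reads raw text.
TRANSCRIPTS = {
    "child-001": "Hi! My name is Mia and my birthday is March 3rd...",
}


def get_transcript_vulnerable(user: User, child_id: str) -> str:
    # Broken access control: the caller is authenticated (a valid Gmail
    # login), but nothing verifies they are allowed to see this child's
    # data. Authentication is not authorization.
    return TRANSCRIPTS[child_id]


# Minimal fix: record which accounts are linked to each child and check
# the link before returning anything. The mapping here is invented.
GUARDIANS = {"child-001": {"parent@example.com"}}


def get_transcript_fixed(user: User, child_id: str) -> str:
    if user.email not in GUARDIANS.get(child_id, set()):
        raise PermissionError("account not authorized for this child")
    return TRANSCRIPTS[child_id]
```

The difference is a single authorization check: proving who a user is says nothing about which children's data they may read.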
The second risk is developmental. Consumer advocacy groups warn that AI toys may erode trust in human relationships and hinder social-skill development in young children.
The constant presence of algorithmic companions risks blurring the line between real and artificial interactions, potentially encouraging obsessive behavior or unrealistic expectations about communication.
Given these combined risks, parents are advised to avoid AI toys entirely for children under five. For older children, recommended safeguards include choosing toys that process speech locally rather than in the cloud, securing any associated accounts, and keeping the devices out of private spaces like bedrooms.
While companies market these toys as educational tools, the exposed chat logs and developmental concerns reveal a stark contrast between corporate promises and real-world consequences.
Sources: Wired | Proton | Common Sense Media