Roundtable at 5Rights IGF Panel
We cannot treat children as subjects of AI experimentation. As AI increasingly becomes part of childhood, we must ensure that AI governance supports children's rights and developmental needs, incorporating safety from the start.
We are very pleased that Dr Jun Zhao joined policymakers, academics, a Youth Ambassador and technical experts to discuss the most pressing risks that AI poses to children. The discussion took place as part of the Roundtable on “Regulating AI and Emerging Risks to Children’s Rights”, organised by the 5Rights Foundation at the Internet Governance Forum 2025.
The EU AI Act is expected to provide the world’s first comprehensive framework for regulating AI. Its final text was approved by the European Parliament on 13 March 2024, and the Act will become fully applicable within two years. As discussed by several other experts, one of the strengths of the new Act is its explicit attention to children as a vulnerable category: the exploitation of their vulnerabilities is explicitly prohibited under Article 5(1)(b).
Article 5(1)(b) AIA prohibits the commercialisation or use of an “AI system that exploits any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in a manner that causes or is reasonably likely to cause that person or another person significant harm”.
While this is certainly an encouraging step towards providing child-specific protection in AI systems, ensuring the effective implementation and enforcement of the vulnerability protections outlined in Article 5 is not without its challenges (5Rights).
Here we use a few examples to illustrate some of these challenges and the need for an effective specification of vulnerabilities and of the significance of harm.
Case study 1: Child-centred AI systems and their support for ethical AI principles
In 2022, we carried out a systematic review of the AI systems designed for children that the Human-Computer Interaction research community had produced over the previous decade; this community has a long history of taking a human-centred approach to design, including designing with and for children.
Our review showed that education and healthcare were the dominant areas in which AI has been explored, alongside applications aimed at keeping children safe online. We looked more closely at the kinds of data these systems used to train their algorithms and were surprised to find a range of highly sensitive personal data, such as children’s biometric and genetic data, often collected without children’s full consent or assent, and not always necessary for the application itself.
We then mapped current AI ethics design principles onto these systems, which revealed a very sparse mapping between principles and implementations. Fundamental principles such as safety and designing for age-appropriateness are not always explicitly considered in these designs.
Case study 2: AI safety risks for children
If the lack of consideration for privacy and safety in an experimental context is concerning, its reflection in real-world contexts can be outright damaging.
An early report in 2017 revealed invasive collection of sensitive data by cute, cuddly smart toys. It was alarming to see such damaging security risks in a supposedly safe place: the home. While stronger data protection rules have been introduced since, the range of IoT devices and smart toys aimed at children has also continued to grow.
A recent study of ours, along with many others, has shown how prevalent smart home devices are in children’s lives. Children’s data can be collected whether the devices are online or offline. As the researchers behind one privacy article admitted: “individual children probably won’t experience negative consequences due to toys creating profiles of them. But nobody really knows that for sure.”
Just like adults, children can be exposed to unfair decisions made by AI systems simply because of their race or socio-economic status, but with potentially much more lasting effects in critical situations such as criminal justice decision-making.
The rapid advancement of AI technologies is often accompanied by rapid deployment, which, ironically, is not always accompanied by sufficient safeguarding in the design process. For example, when Snapchat deployed chatbot-like technology last year, serious risks were reported almost immediately, exposing children to inappropriate content and contact.
While risks related to privacy, safety and fairness have been discussed more extensively in existing policies, the exploitative nature of AI algorithms has received far less attention.
Case study 3: Exploitative AI risks for children
When we began our research on children’s data privacy in 2016, we started with an experiment analysing the third-party data-tracking behaviours of over one million apps harvested from the Google Play Store at the time. Through both dynamic network traffic analysis and static code analysis, we identified how prevalent behind-the-scenes data tracking was, even while we were simply using free apps to book a train ticket or borrow a book from the local library.
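To give a sense of what the dynamic traffic side of such an analysis involves, the sketch below is a deliberately simplified, hypothetical illustration rather than our actual pipeline (the tracker list, log format and app names are all made up): it simply checks which of the domains an app contacts at runtime appear on a list of known third-party trackers.

```python
# Illustrative sketch only: a much-simplified stand-in for dynamic traffic
# analysis. The tracker list, log format and app names are hypothetical.
from collections import defaultdict

# Hypothetical list of known third-party tracker domains.
KNOWN_TRACKERS = {"tracker.example.com", "ads.example.net", "analytics.example.org"}

def trackers_contacted(traffic_log):
    """Given (app_id, contacted_domain) pairs captured while apps run,
    return the known tracker domains each app talked to."""
    contacts = defaultdict(set)
    for app_id, domain in traffic_log:
        if domain in KNOWN_TRACKERS:
            contacts[app_id].add(domain)
    return contacts

# Example usage with made-up data:
log = [("kids.alphabet.app", "tracker.example.com"),
       ("kids.alphabet.app", "cdn.example.com"),
       ("library.booking.app", "analytics.example.org")]
for app, trackers in trackers_contacted(log).items():
    print(app, "->", sorted(trackers))
```

Real analyses additionally inspect app code statically for embedded tracking SDKs, but even this toy version conveys how quickly third-party data flows surface once you look at the traffic.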
What was most shocking for us was the amount of data being collected by the cute apps used by children, often very young children, to learn how to write their first letters or how to tap a floating balloon (for fine-motor development). This was a direct violation of children’s best interests, taking advantage of their vulnerabilities.
Our follow-up studies after the GDPR came into force showed that the tracking had not immediately subsided, while app stores tightened access in ways that made it harder for us to continue the analysis. We kept asking the same questions: why are these technologies harvesting children’s data, and how can we provide better support for children?
Disclosures by various industry insiders over the last few years have provided firm evidence of the exploitative nature of such datafication and algorithmic behaviours. A new study published earlier this year vividly illustrated how much children’s data means to this industry: over 30% of the advertising revenue of large social media platforms stems from the attention economy of young people.
With the industry trying hard to keep children’s attention on their platforms, children’s best interests have not always been the priority. Several recent studies have reached similar findings, showing that recommendation systems can actively amplify harmful content and direct child users towards it. For example, children identified as having mental health issues can be more likely to be shown posts that worsen those issues. Harmful content is prioritised because it is more attention-grabbing, evoking stronger emotions and prolonging engagement.
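The toy simulation below, in the spirit of the simulation-based studies mentioned next, is purely illustrative: it assumes, for the sake of the example, that harmful items attract higher engagement on average, and shows how a recommender that ranks solely by predicted engagement can over-represent such items relative to their share of the candidate pool.

```python
# A minimal toy simulation, not any platform's actual algorithm. It assumes
# (for illustration only) that harmful items draw higher engagement on average,
# and shows how ranking purely by engagement over-represents them in the feed.
import random

random.seed(0)

def make_item(harmful):
    engagement = random.gauss(0.6, 0.1) if harmful else random.gauss(0.4, 0.1)
    return {"harmful": harmful, "engagement": engagement}

# 10% of the candidate pool is harmful content.
pool = [make_item(random.random() < 0.10) for _ in range(10_000)]

# "Recommender": pick the top 100 items by predicted engagement alone.
feed = sorted(pool, key=lambda item: item["engagement"], reverse=True)[:100]

pool_rate = sum(item["harmful"] for item in pool) / len(pool)
feed_rate = sum(item["harmful"] for item in feed) / len(feed)
print(f"harmful share in pool: {pool_rate:.1%}, in recommended feed: {feed_rate:.1%}")
```

The exaggerated numbers matter less than the mechanism: when engagement is the only objective, whatever content happens to grab attention, including harmful content, rises to the top.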
These studies have been conducted mainly through simulations, as researchers have no access to the actual APIs or algorithmic code. When we talked to children directly about their experiences, the findings were consistent: children of all ages describe a passive and disrespectful experience during their time online. At the same time, many of them find it deeply unfair that systems can use their data in this way and manipulate their experience. This appears not only in our findings but in many other studies.
While such feelings of being exploited and disrespected can be hard to quantify, we must not overlook how these practices fundamentally disrespect children’s rights in many ways, and how the same algorithmic practices can cause different degrees of harm, in different ways, to children at different developmental stages or with different vulnerabilities.
Conclusion
These case studies highlight the diverse risks and harms that AI systems pose to children of all ages. While the new EU AI Act explicitly integrates children’s rights into its core framework, effective enforcement must address varying degrees of harm, especially when risks are difficult to quantify or fall below the threshold for prohibition. Such risks can nevertheless lead to harmful experiences for children. Protecting children’s rights in the digital space must be embedded at every stage of innovation, and we look forward to new opportunities to work with policymakers to shape guidelines and support enforcement.