Call for Industry Responsibilities in Ethical Design and Upholding Child Rights in the Context of AI

In this post, we share the speech Dr Zhao gave at the Conference on Child Rights and the Digital Environment, Including AI, co-hosted by the UN Committee on the Rights of the Child (CRC) and Grigol Robakidze University. Held from March 10-12, 2025, in Tbilisi, Georgia, the event marked the 35th anniversary of the Convention on the Rights of the Child and offered a unique platform to address the challenges and opportunities that digital technologies and AI present for children’s rights.
Dr Zhao was invited to speak on the panel “Industry responsibilities in the context of AI and Child Rights”, alongside representatives from industry and academia.
Context: Why this matters
Artificial Intelligence (AI) systems are rapidly changing the world and affecting our children, who regularly interact with AI technologies in many different ways: embedded in the connected toys, smart home IoT devices, apps, and services they use on a daily basis. Such AI systems offer children many exciting opportunities, such as enjoyment and convenience from connected devices, personalised education and learning from intelligent tutoring systems, and online content monitoring and filtering by algorithms that proactively identify potentially harmful content or contexts. However, despite its enormous potential, AI presents challenges for children, including biases affecting vulnerable sub-groups, unforeseen negative consequences, and looming privacy risks from extensive data collection practices.
Over recent years, significant efforts have been made to regulate AI ethically. While there is growing consensus about what ethical principles require in general, engagement with children’s issues remains limited. The UNCRC General comment No. 25 (2021) on children’s rights in relation to the digital environment is a notable exception, providing landmark protection for children’s rights in the digital context. Indeed, designing for children’s rights to non-discrimination, their best interests, and their rights to flourish, develop, and be respected is expected to be a core principle of all technologies designed for children.
However, our 2022 review revealed that existing AI systems for children give limited attention to these principles. Of the more than 180 systems we analysed, fewer than 5% explicitly addressed children’s developmental needs or non-discrimination. While many innovators reported significant challenges in translating ethical design principles into practice, we must not underestimate the importance of getting these principles right, as failing to do so could have a detrimental impact on children.
Landscape of datafied childhood
Our journey of advocating for ethical design for children began in 2017 with our analysis of over one million mobile apps in the Google Play Store. Our findings revealed that implicit data tracking is a pervasive practice, affecting over 90% of the apps we analysed. More importantly, we also discovered extensive harvesting and exploitation of children’s data, often while they were interacting with engaging characters to learn to write their first letters or develop motor skills. More than 28% of the family apps we analysed were sending children’s data to over 10 companies, typically for digital marketing purposes.
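For readers curious how this kind of tracking analysis can work in practice, here is a minimal sketch of the general idea: flag apps whose observed network destinations match a list of known tracker domains. The domain list, app records, and matching logic below are hypothetical illustrations, not the actual pipeline or dataset from our study.

```python
# Illustrative sketch: flagging apps that share data with third-party trackers.
# The tracker domains and app records are invented examples, not the tooling
# or data used in the original analysis.

# A (tiny) blocklist of known tracking/advertising domains.
TRACKER_DOMAINS = {
    "ads.example-network.com",
    "analytics.example-metrics.io",
    "sdk.example-adtech.net",
}

# Each app maps to the set of hosts it was observed contacting.
apps = {
    "alphabet-tracing-game": {
        "cdn.gamestudio.example",
        "ads.example-network.com",
        "analytics.example-metrics.io",
    },
    "bedtime-stories": {"api.storyapp.example"},
}

def tracker_contacts(hosts: set[str]) -> set[str]:
    """Return the subset of contacted hosts that appear on the tracker list."""
    return hosts & TRACKER_DOMAINS

for name, hosts in apps.items():
    trackers = tracker_contacts(hosts)
    if trackers:
        print(f"{name}: contacts {len(trackers)} tracker(s): {sorted(trackers)}")
    else:
        print(f"{name}: no known trackers observed")
```

At scale, the same comparison, run over the network destinations of a million apps, is what turns individual observations into figures like the 90% and 28% above.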
Over the last eight years, we have seen encouraging progress in understanding the direct violation of children’s digital rights. The Disrupted Childhood Report by the 5Rights Foundation is one of the first comprehensive reports to provide evidence from psychology and children’s developmental research. It demonstrates how data about children can be analysed and used to influence their interests in games, prolong screen time, and manipulate their social engagement with ‘friends’ online, amplifying social anxiety and exacerbating mental health issues.
Indeed, a research article published in early 2024 further confirms how children’s data and the exploitation of their attention contribute significantly to the advertising revenue of leading social media companies. The research estimates annual advertising revenue from youth users aged 0-17 at nearly $11 billion, with 30–40% of the advertising revenue generated by three major platforms (Snapchat, TikTok, and YouTube) attributable to young people. While we have seen increased protection of children’s online data privacy and digital rights, there has been far less progress in curbing the continued exploitation of their data or in driving genuine change in industry practices.
Landscape of algorithmic childhood
It is important to recognise the growing evidence that children’s data is not only being exploited for commercial gain but is also often linked to irresponsible algorithmic design choices. Many of these design practices are deployed on large platforms routinely accessed by children, which have yet to integrate ‘designing for children’s best interests’ into the core of their design practices.
One crucial example is recommendation algorithms, which are often evaluated by users’ satisfaction with the results. As a result, ranking results to maximise user engagement and satisfaction has become a dominant factor in creating ‘successful’ algorithms. However, through (over-)personalisation to users’ needs and expectations, these algorithms can quickly become problematic, trapping users in so-called ‘echo chambers’ that shape what they see and what is promoted to them. This practice becomes particularly concerning when applied to children.
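To make this feedback loop concrete, the toy simulation below shows how ranking purely by observed engagement can narrow what a user is shown over time. The topics, click rates, and user model are all invented for illustration; no real platform’s system is represented here.

```python
import random
from collections import Counter

# Toy simulation of an engagement-optimised recommender drifting into an
# 'echo chamber'. Topics, click rates, and the user model are all invented.

random.seed(0)
TOPICS = ["sports", "crafts", "gaming", "music", "science"]

clicks = Counter()
impressions = Counter()
user_favourite = "gaming"  # the simulated user engages most with this topic
EPSILON = 0.1              # small chance of exploring a random topic

def recommend() -> str:
    """Pick the topic with the best observed click-through rate."""
    if random.random() < EPSILON:
        return random.choice(TOPICS)  # occasional exploration
    return max(TOPICS, key=lambda t: clicks[t] / (impressions[t] + 1))

shown = Counter()
for _ in range(2000):
    topic = recommend()
    shown[topic] += 1
    impressions[topic] += 1
    # Simulated user: 90% click rate on the favourite topic, 10% elsewhere.
    if random.random() < (0.9 if topic == user_favourite else 0.1):
        clicks[topic] += 1

print(shown.most_common())
# The favourite's high click-through rate wins the ranking, so the system
# shows ever more of the same topic: personalisation becomes an echo chamber.
```

The point of the sketch is that nothing here is malicious: a system rewarded only for engagement converges, by design, on showing more of whatever a user already responds to.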
Due to the amount of personal information accessible to large platforms, they can often build fairly accurate estimates or profiles of their users. Research has shown that children of certain ethnicities or genders can be unfairly targeted with online content tailored to these personal traits. Furthermore, when personalised promotion is applied to children already struggling with mental health issues, platforms have been found to repeatedly promote such content without considering the consequences. This irresponsible algorithmic exploitation of children’s data, innocence, and vulnerability requires immediate intervention and change.
Children’s demand for data autonomy
At the same time, our research with children has shown a strong theme of data activism emerging from their discussions. These “young data activists” demand that action be taken: more fundamental changes, and the regaining of their autonomy over their data.
In our study, many children talked about how the public should be made aware of datafication and its consequences. They felt such practices were currently largely unknown to the general public, and suggested that a “social movement” (age 7) and “campaigns on social media” (age 8) should be brought in. Beyond public efforts, some children also wanted new regulations to protect them against these datafication practices online, such as “an upgraded version of GDPR” (age 13). Finally, a large proportion of the children demonstrated a strong awareness that data is online platforms’ main source of money, and some began to question why platforms have the right to make money from their data in the first place: “We should be the ones getting paid as it’s our data” (age 12).
While these observations come from a small sample of children in the UK, they suggest that existing support is insufficient and that children are demanding fundamental change and greater autonomy.
Immediate change in industry practices
While there is an increasing urgency to focus on child-centred AI, the landscape we have outlined highlights the pressing need for broader changes in innovation practices. This includes bridging the gap between ethical AI principles and their real-world application in child-centred contexts. We call on platforms to prioritise children’s best interests and rights in their innovations and designs, invest in ethical design, promote public awareness, and listen to children’s voices.
The latter is especially critical, as many technology practitioners cite a lack of public awareness and recognition of ethical design as significant barriers to adopting child-centred practices. To support this shift, we are launching a global initiative for ethical AI design for children, which will gather practical case studies by collaborating closely with industry innovators. We aim to create a community for knowledge sharing and translation—from principles to practice—and work toward a child-centred digital future.
Concluding remarks
As a result of this conference, we hope to see a “Joint Statement on Artificial Intelligence and the Rights of the Child” published in 2025.
If you have any questions or feedback, please reach out to us via @oxfordccai or oxfordccai at cs dot ox dot ac dot uk.