Speaking about child-centred AI ethics at Oxford Reuben College

On Friday 10 May 2024, Dr Zhao spoke at the launch event of Oxford Reuben College’s Global Challenges Programme, a new programme designed to investigate some of the most pressing global challenges of our day. In its pilot phase (2024-2026), the programme will focus on the impact of advanced AI systems on human development.

The launch event focused on several challenges and opportunities: concerns about child online safety, agency, and mental health; opportunities to learn from new research, past ‘mistakes’, and other disciplines; and examples of innovative AI applications supporting language acquisition and creativity.

As an invited speaker, Dr Zhao discussed the crucial question of why we need to consider ethical AI principles from children’s perspectives.

This was based on our experience of working with children and building AI innovations over the last few years, and our latest publication in Nature Machine Intelligence. We hoped that by sharing these discussions and thoughts, we could hear feedback from the workshop audience and open a continued conversation about what is needed to build AI systems for children.

There is no doubt that AI systems are rapidly changing the world and affecting every aspect of our children’s lives. Children already interact with AI technologies constantly and in many ways: through connected toys, smart home devices, apps, and the many AI-powered learning platforms used at school.

With the rapid rise of generative AI technologies, children have again proven to be keen early adopters. The UK Ofcom media use survey, completed a few months after the emergence of generative AI technologies, shows quick and wide adoption by UK children and young people. A later survey by Internet Matters in late 2023 shows that a large proportion of UK children use generative AI to help with their schoolwork, whereas only a small portion of their schools have begun talking about these technologies with children. It is crucial to highlight the disparity in adoption between children from wealthy families and those from lower socio-economic homes; a similar finding has been reported for American families, pointing to a rising and continued digital inequality.

While there are extensive challenges associated with AI for our children, AI also offers numerous exciting opportunities: from adaptive learning technologies that support children’s learning, to tools designed to help children with specific needs improve their handwriting or gain better support for their social and emotional needs. For example, it was exciting to see how the ECHO project, led by UCL, has enabled educators to notice how children with autism may interact with digital characters differently from their peers at school. The digital interface and the semi-autonomous agent character gave children an opportunity to play freely without being judged, while their special needs were observed and supported by their educators. Another example shows how robots can be partners, helping children as young as 6-8 years old with handwriting. Instead of constantly correcting the children’s handwriting, the robot lets children take the teaching role by correcting the robot’s handwriting, boosting their confidence and helping them practise without realising it. Furthermore, like many of the most exciting AI-based innovations today, AI has huge potential to provide better healthcare for children. For example, AI has been widely explored for more timely early detection of autism, achieving a much higher level of accuracy than questionnaire-based approaches.

However, while excited by all this potential, we must be cautious about the extent to which these technologies are developed and used with children’s best interests in mind, and whether essential ethical principles have been followed to ensure minimal harm. Harms can manifest as invasive collection of sensitive data that is not always necessary for the technology’s function; as failure to respect children’s voices and opinions about how their data is collected and used; or as unfair decisions based simply on a child’s race or socio-economic status in critical situations such as criminal justice.

Indeed, many of these concerns are shared by the Children’s Commissioner for England, who has cautioned about the risks of AI for child protection, particularly given that the latest UK government white paper on AI made no mention of children or child protection, except for using AI tools to combat child sexual abuse material (CSAM).

While creating amazing technologies for children, we must consider how to place their best interests at the centre of innovation and regulation. The UNICEF AI for Children project, begun in 2020, is one of the pioneering initiatives. Through extensive engagement with global experts and children all over the world, the project recommends nine fundamental principles for embedding children’s rights in the creation of AI systems.

Around the same time, observing the proliferating range of AI technologies being applied to children, we also became curious about how AI systems have been designed for children over the last decade: in what application areas AI has been used for children, and what ethical principles may or may not have been considered by existing approaches.

We began with a systematic review of research on designing AI systems for children from the HCI research community. After reviewing nearly 200 papers, we found that a considerable proportion of existing research has designed AI systems to support children’s learning, prevent them from being harmed, and enable efficient medical diagnosis. This is quite a contrast to what we see and hear of children’s everyday experience with AI-based systems, which is dominated by gaming and social media. However, it is good to see so much work aiming to foster good in children’s lives.

We then looked at the different types of data used by these AI systems or models. We did anticipate a good range of children’s data being used, given the nature of these applications. However, it is critical to highlight the range of quite sensitive data used by these systems, which indicates an urgent need to ensure that such data are used correctly and ethically. This is particularly crucial given that the majority of the work reviewed did not explicitly discuss how it supports safeguarding children’s data privacy, let alone the process of consent.

The HCI community has a long history of designing for children. Our analysis made us realise that there may be a disconnect between HCI design and AI ethics principles. We want to bring together the user-led, theory-driven approach of the HCI design process and fundamental AI ethics principles, so we can create better child-centred AI systems. We also want to create a community to discuss what child-centred AI means. So, in 2023, at one of the premier HCI conferences, we organised the first child-centred AI workshop, attended by over 30 participants, including child-centred design experts from all sorts of backgrounds. One of the missions of our workshop was to collectively define what we think child-centred AI (CCAI) should entail. Our final definition reflects well both ethical design and designing for children, considering their developmental needs and vulnerabilities. But how can we create child-centred AI for children?

While there is a plethora of AI ethics principles, very few have been considered from children’s perspectives. So we took a deep dive into current AI ethics principles globally, consolidated their terminologies and discussions, and contextualised them for children, to see how they may relate to children.

To do this, we drew on the UNCRC framework on children’s rights in the digital context to align the five core principles we identified (see details in the paper), and to spot any gaps in their current considerations in the context of children. Through this process, we identified some critical gaps in existing AI ethics principles for children, and critical challenges in putting them into practice.

There isn’t enough time to go through all the gaps today, but please do feel free to find out more in our Nature Machine Intelligence paper. Two important things to highlight here:

  • How each principle may be perceived differently by different child-related disciplines, such as education or developmental psychology.
  • How each principle involves much more nuance once we consider children’s developmental process while still recognising children as humans.

For future work, we have a long wish list, but two items made it to the top:

  • The need to respect the developmental aspect of childhood
  • The need to revisit the role of parental support

Children are often regarded as lacking the competence for agency or decision making. However, this does not mean that their best interests should not be respected, or that their voices should not be included in the process of innovation. Children are often protected or restricted in our technology designs, instead of being nurtured or equipped with the agency to recognise manipulation, misinformation, or behavioural engineering. This risks dehumanising children and their rights. Putting children, and all children, at the centre of ethics principles is foundational and critical.

Our second point (about parental support) relates to the work we have been doing with our dear colleagues, Professor Hertog and Professor Weinstein. Parents are hardly discussed in any existing principles, even though they obviously play a critical role in supporting and fostering children’s digital experiences. The current neglect of parent-child relationships in these principles has resulted in a situation where a protective, monitoring-based approach remains the norm, one that is ineffective and does little to protect children either immediately or in the long term. Instead, we should revisit the role of parents and define how to design for children’s autonomy with parents’ active participation.

We cannot forget to contextualise all the challenges discussed so far in a global context. Our other recent work, with colleagues on four different continents, has shown wide differences in how children perceive the datafication around them: how these perceptions may be shaped by the culture or philosophy of their country, but also how national policy and legislative developments can place artificial barriers to their equal access to digital autonomy and ethical AI systems. There are still few cross-cultural examinations from children’s perspectives, which would be critical for addressing this global challenge.