Source: Save the Children
Anna*, 7, poses for a portrait as she studies with a tablet in a Digital Learning Center for teachers in Mykolaiv, Ukraine. Oleksandr Khomenko/Save the Children.
Imagine being a parent in a region just hit by a devastating earthquake. Your home is partially destroyed, and your primary concern is protecting your children amidst the chaos. With limited access to emergency services, you are unsure what immediate steps you can take to ensure their safety and wellbeing. In this critical moment, “Ask Save the Children” could provide you with reliable, real-time information on essential actions to protect your children from imminent harm.
“Ask Save the Children” is a generative Artificial Intelligence (AI) tool intended not only to help parents like you, but also to equip teachers, community leaders, child protection professionals and public officials to respond effectively, whether in emergencies like this earthquake scenario or for everyday child protection needs. It uses machine learning (a type of AI that allows a system to learn and improve from experience) to analyse vast amounts of child protection resources and generate immediate, context-specific advice for those who need it, whenever they need it.
Unlike general AI platforms like AskBing or ChatGPT, “Ask Save the Children” is uniquely tailored to the field of child protection. It is underpinned by our extensive repository of specialized research and evidence in this area, offering a level of focus and depth not found in more generic language models. Because of this, it has the potential to be an exceptionally relevant and effective resource for child protection professionals and others who need this type of specialized support.
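For readers curious about how such a tool is typically put together, the sketch below illustrates the general pattern the paragraphs above describe: retrieve the most relevant guidance from a curated repository, then ask a generative model to answer using only that material. This is purely a minimal illustration, not the actual implementation of “Ask Save the Children”; the repository contents, function names and keyword-overlap scoring are hypothetical placeholders.

```python
# Illustrative sketch only: grounding a generative model's answer in a curated
# repository of child protection guidance. All documents, names and the scoring
# method are hypothetical placeholders, not Save the Children's implementation.
from dataclasses import dataclass


@dataclass
class GuidanceDoc:
    title: str
    region: str   # e.g. "global" or a country/region label, used for later checks
    text: str


# A toy "repository" standing in for the curated evidence base.
REPOSITORY = [
    GuidanceDoc("Family separation after disasters", "global",
                "Keep children with known caregivers and register separated children promptly."),
    GuidanceDoc("Psychological first aid for children", "global",
                "Stay calm, give simple honest information, and keep familiar routines where possible."),
]


def retrieve(query: str, repo: list[GuidanceDoc], k: int = 2) -> list[GuidanceDoc]:
    """Rank documents by naive keyword overlap with the query (placeholder for a real retriever)."""
    q_words = set(query.lower().split())
    ranked = sorted(repo, key=lambda d: len(q_words & set(d.text.lower().split())), reverse=True)
    return ranked[:k]


def build_prompt(query: str, docs: list[GuidanceDoc]) -> str:
    """Assemble a grounded prompt; a generative model would be asked to answer using only this context."""
    context = "\n".join(f"- {d.title}: {d.text}" for d in docs)
    return f"Answer using only the guidance below.\n{context}\nQuestion: {query}"


if __name__ == "__main__":
    question = "My home was damaged in an earthquake. How do I keep my children safe?"
    print(build_prompt(question, retrieve(question, REPOSITORY)))
```

In practice the retrieval step would use far more sophisticated ranking, but the principle is the same: the model’s answer is grounded in vetted child protection evidence rather than the open internet.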
It’s important to note that “Ask Save the Children” is still in the design and testing phase and not yet available for public use. It is a prototype. We are writing this blog in the spirit of radical transparency, to openly discuss the ethical implications as we develop the tool. While we are excited about its potential, we are taking the time to carefully think through the challenges and solutions, particularly as they relate to ethics, safety and accessibility.
Pause for a moment and picture the chaos of that earthquake-stricken area: the uncertainty, the urgent need for guidance. In such a situation, trustworthy advice isn’t just a benefit, it’s essential. Leveraging AI, we can apply our expertise to provide families like yours with the decisive support you need. Yet, as we navigate these new frontiers, we are acutely aware of the ethical considerations that accompany such powerful tools. So, let’s first unpack three pressing concerns about using AI-based solutions in crisis situations that could directly affect you: algorithmic bias, data privacy, and information accuracy.
Algorithmic bias is an inherent risk in any AI system, stemming from the prejudices of its creators or the data it is trained on. Such bias can perpetuate existing inequalities, often prioritizing one group over another, not as a rare glitch but as a persistent issue that needs vigilant attention and rigorous management from the outset. So, while “Ask Save the Children” will draw from a comprehensive range of child protection articles, evaluations, and reports, we recognize that much of this research is currently Eurocentric in origin, with perspectives, standards, and practices centred around European experiences and frameworks. This could limit the tool’s effectiveness and inclusivity, particularly in the Global South.
To mitigate this, we are committed to rigorously scrutinizing and diversifying our data sources and evidence base. We actively seek out and incorporate research from a multitude of perspectives to ensure our tool reflects a truly global understanding of child protection. In addition, we are involving local technical experts in moderating the responses. In doing so, we put each country’s cultural context and needs at the forefront, with the aim of bridging the gap between global knowledge and local application.
This is complemented by a robust ethical framework for testing our AI, ensuring safety and protection for all participants, and explicitly avoiding the unethical use of crises as experimental settings for our technology.
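To make these safeguards a little more concrete, the sketch below shows one simple way such checks could be expressed in code: flagging an answer for review by an in-country expert when none of its sources cover the relevant country, and checking that no single region dominates the evidence behind an answer. The region labels, threshold and routing logic are hypothetical, not a description of our actual moderation workflow.

```python
# Illustrative sketch only: simple bias and coverage checks before an answer is released.
# Region labels, the threshold and the routing decision are hypothetical placeholders.

def needs_local_review(source_regions: list[str], answer_country: str) -> bool:
    """Flag an answer for in-country expert moderation if none of its sources
    are global or cover the country the question was asked about."""
    regions = [r.lower() for r in source_regions]
    return answer_country.lower() not in regions and "global" not in regions


def has_diverse_sources(source_regions: list[str], max_share: float = 0.8) -> bool:
    """Return True only if no single region dominates the evidence behind an answer."""
    if not source_regions:
        return False
    regions = [r.lower() for r in source_regions]
    top_share = max(regions.count(r) for r in set(regions)) / len(regions)
    return top_share <= max_share


if __name__ == "__main__":
    sources = ["europe", "europe", "europe"]           # regions of the documents behind an answer
    print(needs_local_review(sources, "Philippines"))  # True  -> route to an in-country expert
    print(has_diverse_sources(sources))                # False -> evidence base is too narrow
```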
A further concern involves more personal circumstances, where you may desperately want information but are afraid to ask or to be identified. In a situation like this earthquake, where you may be frantically inputting personally identifiable information (PII) such as names, locations, or specific incidents, you would have legitimate concerns about how this data is handled and stored, and about who might have access to it.
To allay these concerns, Save the Children adheres to international standards for data protection. We collect PII only when essential, with the user’s explicit consent, and under stringent data protection protocols that include advanced encryption and secure storage mechanisms. Our comprehensive data governance policies help ensure your personal details are protected with the highest level of care, maintaining your privacy while providing you with immediate, critical advice.
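As a simple illustration of the data-minimization principle described above, the sketch below redacts obvious personal identifiers before a query is ever stored, and stores nothing at all without explicit consent. The patterns shown are deliberately simplistic placeholders; a production system would combine consent management, proper PII detection, encryption at rest and strict access controls.

```python
# Illustrative sketch only: minimal PII redaction and consent gating before storage.
# The regex patterns and storage logic are simplified placeholders.
import re

PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")


def redact_pii(text: str) -> str:
    """Replace phone numbers and email addresses with placeholders before storage."""
    text = PHONE.sub("[PHONE]", text)
    text = EMAIL.sub("[EMAIL]", text)
    return text


def store_query(raw_query: str, consented: bool, log: list[str]) -> None:
    """Store a query only with explicit consent, and only after redaction."""
    if not consented:
        return                      # nothing is retained without consent
    log.append(redact_pii(raw_query))


if __name__ == "__main__":
    audit_log: list[str] = []
    store_query("My daughter is missing near the school, call me on +380 50 123 4567",
                consented=True, log=audit_log)
    print(audit_log)   # ['My daughter is missing near the school, call me on [PHONE]']
```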
Finally, we are aware that no AI system is infallible. While “Ask Save the Children” is being built on a robust knowledge base, there remains the possibility of it generating advice that is incomplete or not fully applicable to your unique situation. We therefore stress that our tool should serve as a supplementary resource, not a replacement for professional services and support. We are committed to continuous improvement: we will signpost you to the appropriate services available in your context and regularly update the system based on your feedback and emerging research.
The effectiveness of “Ask Save the Children” hinges not just on advanced AI technology but on our active efforts to ensure responsible development and deployment. This means embedding rigorous ethical standards and safeguards at every phase of the project: collecting diverse and unbiased data, fine-tuning algorithms, designing an inclusive user interface, and having our global experts intensively test the accuracy of the information the tool provides. Given the sensitivities of child protection, this ethical integrity is non-negotiable.
As you reflect on the transformative potential of AI in child protection, it’s crucial to remember the ethical stakes. We urge you to become advocates for responsible AI, not just within the realm of child protection but across all sectors and applications. Your voice matters in holding technology developers and policymakers accountable for ensuring AI serves as a force for good. Now is the time to champion an ethical approach to AI, celebrating its advancements while scrutinizing its risks. By remaining engaged with this discourse, you contribute to a future where technology makes life better for children everywhere, without compromising their safety or dignity.