OpenAI, alongside industry leaders including Amazon, Anthropic, Civitai, Google, Meta, Metaphysic, Microsoft, Mistral AI and Stability AI, has committed to implementing robust child safety measures in the development, deployment and maintenance of generative AI technologies, as outlined in the Safety by Design principles. This initiative, led by Thorn, a non-profit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to tackling complex problems in technology and society, aims to mitigate the risks generative AI poses to children. By adopting comprehensive Safety by Design principles, OpenAI and our peers are ensuring child safety is prioritized at every stage of AI development.

To date, we have made significant efforts to minimize the potential for our models to generate content that harms children, set age restrictions for ChatGPT, and actively engage with the National Center for Missing and Exploited Children (NCMEC), the Tech Coalition, and other government and industry stakeholders on child protection issues and enhancements to reporting mechanisms.
As part of this Safety by Design effort, we are committed to:
- Develop: Develop, build and train generative AI models that proactively address child safety risks.
  - Responsibly source our training datasets, detect and remove child sexual abuse material (CSAM) and child sexual exploitation material (CSEM) from training data, and report any confirmed CSAM to the relevant authorities.
  - Incorporate feedback loops and iterative stress-testing strategies into our development process.
  - Deploy solutions to address adversarial misuse.
- Deploy: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process.
  - Combat and respond to abusive content and conduct, and incorporate prevention efforts.
  - Encourage developer ownership in safety by design.
- Maintain: Maintain model and platform safety by continuing to actively understand and respond to child safety risks.
  - Commit to removing new AIG-CSAM generated by bad actors from our platform.
  - Invest in research and future technology solutions.
  - Fight CSAM, AIG-CSAM and CSEM on our platforms.
This commitment marks an important step in preventing the misuse of AI technologies to create or spread AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. As part of the working group, we have also agreed to publish progress updates every year.