Microsoft joins Thorn and All Tech Is Human to enact strong child safety commitments for generative AI


While millions of people use AI to supercharge their productivity and expression, there is a risk that these technologies will be abused. Building on our longstanding commitment to online safety, Microsoft has joined Thorn, All Tech Is Human, and other leading companies in their effort to prevent the misuse of generative AI technologies to perpetrate, proliferate, and further sexual harms against children. Today, Microsoft is committing to implementing preventative and proactive principles into our generative AI technologies and products.

This initiative, led by Thorn, a nonprofit dedicated to defending children from sexual abuse, and All Tech Is Human, an organization dedicated to collectively tackling tech and society’s complex problems, aims to mitigate the risks generative AI poses to children. The principles also align with and build upon Microsoft’s approach to addressing abusive AI-generated content. That includes the need for a strong safety architecture grounded in safety by design, safeguards for our services against abusive content and conduct, and robust collaboration across industry and with governments and civil society. We have a longstanding commitment to combating child sexual exploitation and abuse, including through critical partnerships such as the National Center for Missing & Exploited Children, the Internet Watch Foundation, the Tech Coalition, and the WeProtect Global Alliance. We also provide support to INHOPE, recognizing the need for international efforts to support reporting. These principles will support us as we take forward our comprehensive approach.

As part of this Safety by Design effort, Microsoft commits to taking action on these principles and to sharing progress transparently and regularly. Full details on the commitments can be found on Thorn’s website and below, but in summary, we will:

  • DEVELOP: Develop, build, and train generative AI models that proactively address child safety risks.
  • DEPLOY: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process.
  • MAINTAIN: Maintain model and platform safety by continuing to actively understand and respond to child safety risks.

Today’s commitment marks a significant step forward in preventing the misuse of AI technologies to create or spread AI-generated child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. This collective action underscores the tech industry’s approach to child safety, demonstrating a shared commitment to ethical innovation and the well-being of the most vulnerable members of society.

We will also continue to engage with policymakers on the legal and policy conditions needed to support safety and innovation. This includes building a shared understanding of the AI tech stack and the application of existing laws, as well as ways to modernize the law so that companies have the appropriate legal frameworks to support red-teaming efforts and the development of tools to help detect potential CSAM.

We look forward to partnering across industry, civil society, and governments to take forward these commitments and advance safety across different elements of the AI tech stack. Information-sharing on emerging best practices will be critical, including through work led by the new AI Safety Institute and elsewhere.

Our full commitment

DEVELOP: Develop, build, and train generative AI models that proactively address child safety risks

  • Responsibly source our training datasets and safeguard them from child sexual abuse material (CSAM) and child sexual exploitation material (CSEM): This is essential to helping prevent generative models from producing AI-generated child sexual abuse material (AIG-CSAM) and CSEM. The presence of CSAM and CSEM in training datasets is one avenue by which generative models learn to reproduce this type of abusive content. For some models, compositional generalization further allows them to combine concepts (e.g., adult sexual content and non-sexual depictions of children) to produce AIG-CSAM. We are committed to avoiding or mitigating training data with a known risk of containing CSAM and CSEM. We are committed to detecting and removing CSAM and CSEM from our training data (a minimal hash-matching illustration follows this list) and to reporting any confirmed CSAM to the relevant authorities. We are committed to addressing the risk of creating AIG-CSAM that is posed by having depictions of children alongside adult sexual content in our video, image, and audio generation training datasets.
  • Incorporate feedback loops and iterative stress-testing strategies in our development process: Continuous learning and testing to understand a model’s capacity to produce abusive content is key to effectively combating the adversarial misuse of these models downstream. If we don’t stress-test our models for these capabilities, bad actors will do so regardless. We are committed to conducting structured, scalable, and consistent stress testing of our models throughout the development process for their capability to produce AIG-CSAM and CSEM within the bounds of law, and to integrating these findings back into model training and development to improve safety assurance for our generative AI products and systems.
  • Employ content provenance with adversarial misuse in mind: Bad actors use generative AI to create AIG-CSAM. This content is photorealistic and can be produced at scale. Victim identification is already a needle-in-a-haystack problem for law enforcement: sifting through huge amounts of content to find the child in active harm’s way. The expanding prevalence of AIG-CSAM is growing that haystack even further. Content provenance solutions that can reliably discern whether content is AI-generated will be crucial to responding effectively to AIG-CSAM. We are committed to developing state-of-the-art media provenance or detection solutions for our tools that generate images and videos. We are committed to deploying solutions that address adversarial misuse, such as incorporating watermarking or other techniques that embed signals imperceptibly in the content as part of the image and video generation process, as technically feasible (a toy embedding illustration also follows this list).
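
To make the dataset-hygiene commitment above concrete, the sketch below shows one simple building block: exact hash matching of a training corpus against a partner-supplied list of known-bad digests. It is a minimal illustration, not Microsoft’s actual pipeline; the file names, hash list, and quarantine_and_report hook are hypothetical placeholders. Real systems also rely on perceptual matching (for example, PhotoDNA-style hashes) to catch re-encoded copies that exact digests miss.

```python
# Minimal, illustrative sketch only: exact-hash filtering of a training
# corpus against a list of known-bad digests. All paths and hooks are
# hypothetical; production pipelines also use perceptual hashing.
import hashlib
from pathlib import Path


def load_hash_list(path: str) -> set[str]:
    """Load one lowercase hex digest per line from a partner-supplied list."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}


def sha256_of(path: Path) -> str:
    """Hash a file in 1 MiB chunks so large media files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def quarantine_and_report(path: Path) -> None:
    """Placeholder: a real pipeline would isolate the file and report
    confirmed CSAM to the relevant authorities, not merely log it."""
    print(f"match found, removed from corpus: {path}")


def filter_dataset(image_dir: Path, known_bad: set[str]) -> list[Path]:
    """Keep only files whose digests are not on the known-bad list."""
    kept: list[Path] = []
    for path in sorted(p for p in image_dir.glob("*") if p.is_file()):
        if sha256_of(path) in known_bad:
            quarantine_and_report(path)
        else:
            kept.append(path)
    return kept


if __name__ == "__main__":
    # "hash_list.txt" and "training_images/" are hypothetical locations.
    bad = load_hash_list("hash_list.txt")
    kept = filter_dataset(Path("training_images"), bad)
    print(f"kept {len(kept)} files")
```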
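
The “embed signals imperceptibly” idea can likewise be shown in miniature. The toy below writes a payload into the least significant bits of an image array and reads it back. It is deliberately naive, since an LSB mark does not survive re-encoding or deliberate removal; production provenance work combines robust watermarks with signed metadata such as C2PA Content Credentials. The tag string and round-trip test are illustrative only.

```python
# Toy illustration of imperceptible embedding, not a production
# watermark: an LSB mark is destroyed by re-encoding and easy to strip.
import numpy as np


def embed_bits(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Overwrite the least significant bit of the first len(payload)*8
    pixel values with the payload bits."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.reshape(-1).copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for image")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)


def extract_bits(pixels: np.ndarray, n_bytes: int) -> bytes:
    """Read back a payload written by embed_bits."""
    bits = (pixels.reshape(-1)[: n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    tag = b"ai-generated"  # hypothetical provenance marker
    marked = embed_bits(image, tag)
    assert extract_bits(marked, len(tag)) == tag  # survives a round trip
```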

DEPLOY: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process

  • Safeguard our generative AI products and services from abusive content and conduct: Our generative AI products and services empower our users to create and explore new horizons. Those same users deserve a space of creation that is free from fraud and abuse. We are committed to combating and responding to abusive content (CSAM, AIG-CSAM, and CSEM) throughout our generative AI systems, and to incorporating prevention efforts (an illustrative safety-gate sketch follows this list). Our users’ voices are key, and we are committed to incorporating user reporting and feedback options that empower users to build freely on our platforms.
  • Responsibly host models: As our models continue to achieve new capabilities and creative heights, a wide variety of deployment mechanisms presents both opportunity and risk. Safety by design must encompass not just how our models are trained but also how they are hosted. We are committed to responsible hosting of our first-party generative models, assessing them (for example, via red teaming or phased deployment) for their potential to generate AIG-CSAM and CSEM, and implementing mitigations before hosting. We are also committed to responsibly hosting third-party models in a way that minimizes the hosting of models that generate AIG-CSAM. We will ensure we have clear rules and policies prohibiting models that generate child-safety-violative content.
  • Encourage developer ownership in safety by design: Developer creativity is the lifeblood of progress. This progress must come paired with a culture of ownership and responsibility. We encourage developer ownership in safety by design. We will endeavor to provide information about our models, including a child safety section detailing steps taken to avoid the downstream misuse of the model to further sexual harms against children. We are committed to supporting the developer ecosystem in their efforts to address child safety risks.
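
As one way to picture the safeguarding commitment above, the sketch below gates a generation endpoint with checks on both the prompt and the output. Every function here (the classifiers, the generator, the escalation hook) is a hypothetical stand-in, not a real Microsoft or Azure API; real deployments use trained safety classifiers and dedicated reporting pipelines rather than keyword lists and print statements.

```python
# Illustrative safety-gate sketch. All functions are hypothetical
# stand-ins; real systems use trained classifiers, not keyword lists.
from dataclasses import dataclass
from typing import Optional


@dataclass
class GateResult:
    allowed: bool
    reason: str
    content: Optional[bytes] = None


def prompt_is_blocked(prompt: str) -> bool:
    """Stand-in for a trained text classifier that flags prompts seeking
    content that sexualizes minors."""
    blocked_terms = {"example-blocked-term"}  # placeholder, not a real list
    return any(term in prompt.lower() for term in blocked_terms)


def generate_image(prompt: str) -> bytes:
    """Stand-in for a real image-generation model call."""
    return f"image for: {prompt}".encode()


def output_is_violative(image: bytes) -> bool:
    """Stand-in for a trained image safety classifier."""
    return False


def log_and_escalate(prompt: str) -> None:
    """Stand-in for routing blocked attempts to human review and, where
    legally required, to reporting pipelines."""
    print("escalated for review")


def safe_generate(prompt: str) -> GateResult:
    """Check the prompt before generation and the output after it."""
    if prompt_is_blocked(prompt):
        log_and_escalate(prompt)
        return GateResult(False, "prompt blocked by safety policy")
    image = generate_image(prompt)
    if output_is_violative(image):
        log_and_escalate(prompt)
        return GateResult(False, "output blocked by safety policy")
    return GateResult(True, "ok", image)
```

The design property worth noting is that both the request and the result are checked, so a prompt that evades the text filter can still be caught at the output stage.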

MAINTAIN: Maintain model and platform safety by continuing to actively understand and respond to child safety risks

  • Prevent our services from scaling access to harmful tools: Bad actors have built models specifically to produce AIG-CSAM, in some cases targeting specific children to produce AIG-CSAM depicting their likeness. They have also built services that are used to “nudify” images of children, creating new AIG-CSAM. This is a severe violation of children’s rights. We are committed to removing these models and services from our platforms and search results.
  • Invest in research and future technology solutions: Online child sexual abuse is an ever-evolving threat, as bad actors adopt new technologies in their efforts. Effectively combating the misuse of generative AI to further child sexual abuse will require continued research to stay current with new harm vectors and threats. For example, new technology to protect user content from AI manipulation will be important to protecting children from online sexual abuse and exploitation. We are committed to investing in relevant research and technology development to address the use of generative AI for online child sexual abuse and exploitation. We will continuously seek to understand how our platforms, products, and models are potentially being abused by bad actors. We are committed to maintaining the quality of our mitigations to meet and overcome new avenues of misuse as they materialize.
  • Fight CSAM, AIG-CSAM and CSEM on our platforms: We are committed to fighting CSAM online and preventing our platforms from being used to create, store, solicit or distribute this material. As new threat vectors emerge, we are committed to meeting this moment. We are committed to detecting and removing child safety violative content on our platforms. We are committed to disallowing and combating CSAM, AIG-CSAM and CSEM on our platforms, and combating fraudulent uses of generative AI to sexually harm children.

