Microsoft bans US police departments from using enterprise AI tool for facial recognition
Microsoft has reaffirmed its ban on U.S. police departments using generative AI for facial recognition through Azure OpenAI Service, the company’s fully managed, enterprise-focused wrapper around OpenAI’s technology.
Language added Wednesday to the terms of service for Azure OpenAI Service more explicitly prohibits integrations with the service from being used “by or for” police departments in the U.S. for facial recognition, including integrations with OpenAI’s current and any future image-analyzing models.
A separate new bullet point covers “any law enforcement globally,” and explicitly bars the use of “real-time facial recognition technology” on mobile cameras, like body cameras and dashcams, to attempt to identify a person in “uncontrolled, in-the-wild” environments.
The changes in policy come a week after Axon, a maker of tech and weapons products for military and law enforcement, announced a new product that leverages OpenAI’s GPT-4 generative text model to summarize audio from body cameras. Critics were quick to point out the potential pitfalls, like hallucinations (even the best generative AI models today invent facts) and racial biases introduced from the training data (which is especially concerning given that people of color are far more likely to be stopped by police than their white peers).
It’s unclear whether Axon was using GPT-4 via Azure OpenAI Service, and, if so, whether the updated policy was in response to Axon’s product launch. OpenAI had previously restricted the use of its models for facial recognition through its APIs. We’ve reached out to Axon, Microsoft and OpenAI and will update this post if we hear back.
The new terms leave wiggle room for Microsoft.
The complete ban on Azure OpenAI Service usage pertains only to U.S. police, not international police. And the global restriction doesn’t cover facial recognition performed with stationary cameras in controlled environments, like a back office (although, for U.S. police, the terms prohibit any use of facial recognition).
That tracks with Microsoft’s and close partner OpenAI’s recent approach to AI-related law enforcement and defense contracts.
In January, reporting by Bloomberg revealed that OpenAI is working with the Pentagon on a number of projects, including cybersecurity capabilities — a departure from the startup’s earlier ban on providing its AI to militaries. Elsewhere, Microsoft has pitched using OpenAI’s image generation tool, DALL-E, to help the Department of Defense (DoD) build software to execute military operations, per The Intercept.
Azure OpenAI Service became available in Microsoft’s Azure Government product in February, adding compliance and management features geared toward government agencies, including law enforcement. In a blog post, Candice Ling, SVP of Microsoft’s government-focused division Microsoft Federal, pledged that Azure OpenAI Service would be “submitted for additional authorization” to the DoD for workloads supporting DoD missions.
Update: After publication, Microsoft said its original change to the terms of service contained an error, and in fact the ban applies only to facial recognition in the U.S. It is not a blanket ban on police departments using the service.