
UK data protection watchdog ends privacy probe of Snap’s GenAI chatbot, but warns industry


The U.K.’s data protection watchdog has closed an almost year-long investigation of Snap’s AI chatbot, My AI — saying it’s satisfied the social media firm has addressed concerns about risks to children’s privacy. At the same time, the Information Commissioner’s Office (ICO) issued a general warning to industry to be proactive about assessing risks to people’s rights before bringing generative AI tools to market.

GenAI refers to a flavor of AI that foregrounds content creation. In Snap’s case, the tech powers a chatbot that can respond to users in a human-like way, such as by sending text messages and snaps, allowing the platform to offer automated interaction.

Snap’s AI chatbot is powered by OpenAI’s ChatGPT, but the social media firm says it applies various safeguards to the application, including programming guidelines and age consideration by default, which are intended to prevent kids from seeing age-inappropriate content. It also bakes in parental controls.

“Our investigation into ‘My AI’ should act as a warning shot for industry,” wrote Stephen Almond, the ICO’s exec director of regulatory risk, in a statement Tuesday. “Organisations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market.”

“We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers — including fines — to protect the public from harm,” he added.

Back in October, the ICO sent Snap a preliminary enforcement notice over what it described then as a “potential failure to properly assess the privacy risks posed by its generative AI chatbot ‘My AI’”.

That preliminary notice last fall appears to be the only public rebuke for Snap. In theory, the regime allows for fines of up to 4% of a company’s annual turnover in cases of confirmed breaches of data protection law.

Announcing the conclusion of its probe Tuesday, the ICO suggested the company took “significant steps to carry out a more thorough review of the risks posed by ‘My AI’”, following its intervention. The ICO also said Snap was able to demonstrate that it had implemented “appropriate mitigations” in response to the concerns raised — without specifying what additional measures (if any) the company has taken (we’ve asked).

More details may be forthcoming when the regulator’s final decision is published in the coming weeks.

“The ICO is satisfied that Snap has now undertaken a risk assessment relating to ‘My AI’ that is compliant with data protection law. The ICO will continue to monitor the rollout of ‘My AI’ and how emerging risks are addressed,” the regulator added.

Reached for a response to the conclusion of the investigation, a spokesperson for Snap sent us a statement — writing: “We’re pleased the ICO has accepted that we put in place appropriate measures to protect our community when using My AI. While we carefully assessed the risks posed by My AI, we accept our assessment could have been more clearly documented and have made changes to our global procedures to reflect the ICO’s constructive feedback. We welcome the ICO’s conclusion that our risk assessment is fully compliant with UK data protection laws and look forward to continuing our constructive partnership.”

Snap declined to specify any mitigations it implemented in response to the ICO’s intervention.

The U.K. regulator has said generative AI remains an enforcement priority. It points developers to guidance it’s produced on AI and data protection rules. It also has a consultation open asking for input on how privacy law should apply to the development and use of generative AI models.

The U.K. has yet to introduce formal legislation for AI, as the government has opted to rely on regulators like the ICO to determine how existing rules apply. By contrast, European Union lawmakers have just approved a risk-based framework for AI, set to apply in the coming months and years, which includes transparency obligations for AI chatbots.


