
Slack updates AI ‘privacy principles’ after user backlash


Slack has updated its “privacy principles” in response to concerns about the use of customer data to train its generative AI (genAI) models. 

The company said in a blog post Friday that it does not rely on user data, such as Slack messages and files, to develop the large language models (LLMs) powering the genAI features in its collaboration app. But customer data is still used by default for its machine learning-based recommendations, and customers must opt out.

Criticism of Slack’s privacy stance apparently began last week, when a Slack user posted on X about the company’s privacy principles, highlighting the use of customer data in its AI models and the requirement to opt out. Others expressed outrage in a Hacker News thread.

On Friday, Slack responded to the frustration by updating some of the language in its privacy principles, attempting to differentiate between its traditional machine learning models and the LLMs behind its genAI features.

Slack uses machine learning techniques for certain features such as emoji and channel recommendations, as well as in search results. While these ML algorithms are indeed trained on user data, they are not built to “learn, memorize, or be able to reproduce any customer data of any kind,” Slack said. These ML models use “de-identified, aggregate data and do not access message content in DMs, private channels, or public channels.”

No customer data is used to train the third-party LLMs behind its Slack AI tools, the company said.

Slack noted the user concerns and acknowledged that the previous wording of its privacy principles contributed to the confusion. “We value the feedback, and as we looked at the language on our website, we realized that they were right,” the company said. “We could have done a better job of explaining our approach, especially regarding the differences in how data is used for traditional machine-learning (ML) models and in generative AI.”

“Slack’s privacy principles should help it address concerns that could potentially stall adoption of genAI initiatives,” said Raúl Castañón, senior research analyst at 451 Research, part of S&P Global Market Intelligence.


However, Slack continues to opt customers in by default when it comes to using their data in its AI/ML models. To opt out, the Slack admin at a customer organization must email the company and request that the organization’s data no longer be used.

Castañón said Slack’s stance is unlikely to allay concerns around data privacy as businesses begin to deploy genAI tools. “In a similar way as with consumer privacy issues, while an opt-in approach is considerably less likely to get a response, it typically conveys more trustworthiness,” he said.

A recent survey by analyst firm Metrigy showed that the use of customer data to train AI models is the norm: 73% of organizations polled are training or plan to train AI models on customer data.

“Ideally, training would be opt-in, not opt-out, and companies like Slack/Salesforce would proactively inform customers of the specifics of what data is being used and how it is being used,” said Irwin Lazar, president and principal analyst at Metrigy.  “I think that privacy concerns related to AI training are only going to grow and companies are increasingly going to face backlash if they don’t clearly communicate data use and training methods.”


