
Slack under attack over sneaky AI training policy

On the heels of ongoing issues around how big tech appropriates personal and corporate data to train AI services, a storm is brewing among Slack users unhappy with how the Salesforce-owned chat platform is pushing ahead with its AI vision.

The company, like many others, leverages its own user data to train some of its new AI services. But it turns out that if you don’t want Slack to use your data, you need to email the company to opt out.

And it turns out the terms of that arrangement are tucked away in what appears to be an outdated, confusing privacy policy that no one was paying attention to. That was the case with Slack, until an unhappy user posted about the terms on a community site hugely popular with developers, and the post went viral.

It all started last night, when a note on Hacker News raised the question of how Slack trains its AI services, via a straightforward link to its privacy principles; no additional comment was needed. That post kicked off a longer conversation, and what appeared to be news to current Slack users: that Slack opts users in to its AI training by default, and that you need to email a specific address to opt out.

The Hacker News thread then sparked multiple conversations and questions on other platforms: There is a newish, generically named product called “Slack AI” that lets users search for answers and summarize conversation threads, among other things. So why is it not mentioned by name even once on the privacy principles page, even to make clear whether the privacy policy applies to it? And why does Slack refer to both “global models” and “AI models”?

Between the confusion over where Slack is applying its AI privacy principles and the surprise and annoyance at the idea of emailing to opt out, at a company that makes a point of touting that “You control your data,” Slack does not come off well.

The shock may be new, but the terms are not. According to pages on the Internet Archive, the terms have applied since at least September 2023. (We have asked the company to confirm.)

Per the privacy policy, Slack uses customer data specifically to train “global models,” which power its channel and emoji recommendations and search results. Slack tells us that its use of this data has specific limits.

“Slack has platform-level machine learning models for things like channel and emoji recommendations and search results. We do not build or train these models in a way that they can learn, remember or reproduce any part of customer data,” a company spokesperson told TechCrunch. However, the policy does not appear to address the overall scope of the company’s broader plans for training AI models.

In its terms, Slack says that if customers opt out of data training, they will still benefit from the company’s “globally trained AI/ML models.” But again, in that case, it’s unclear why the company uses customer data to power features like emoji recommendations in the first place.

The company also said it does not use customer data to train Slack AI.

“Slack AI is a separately purchased add-on that uses large language models (LLMs) but does not train those LLMs on customer data. Slack AI uses LLMs hosted directly within Slack’s AWS infrastructure, so customer data remains in-house and is not shared with any LLM provider. This ensures that customer data stays in that organization’s control and exclusively for that organization’s use,” a spokesperson said.

Some of the confusion is likely to be addressed sooner rather than later. In a reply to a critical take on Threads from engineer and writer Gergely Orosz, Slack engineer Aaron Maurer conceded that the company needs to update the page to reflect “how these privacy principles play out with Slack AI.”

Maurer added that the terms were written back when the company didn’t have Slack AI, and that these rules reflect the company’s work around search and recommendations. It will be worth reviewing the terms for future updates, given the confusion around what Slack is currently doing with its AI.

Slack’s problems are a stark reminder that, in the fast-moving world of AI development, user privacy should not be an afterthought, and a company’s terms of service should spell out clearly how and when data is used, and when it is not.

Do you have a news tip? Contact Ingrid securely on Signal at ingrid.101. (No PR pitches, please.)

News source: techcrunch.com