
Slack has been siphoning user data to train AI models without asking permission



Facepalm: For organizations, the specter of internal data being used to train AI models raises serious concerns around security and compliance. Yet Slack has apparently been slurping up messages, files, and other data behind the scenes to train its AI features. Worse still, users were automatically opted into the arrangement without their knowledge or consent.

The revelation, which blew up online this week after a user called it out on X/Twitter, has plenty of people peeved that Slack didn't make this clearer from the jump. Corey Quinn, an executive at Duckbill Group, kicked off the fuss with an angry post asking, "I'm sorry Slack, you're doing f**king WHAT with user DMs, messages, files, etc?"

Quinn was referring to an excerpt from Slack's Privacy Principles that reads, "To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement."

Slack was quick to respond under the same post, confirming that it does use customer content to train certain AI tools in the app. But it drew a line: that data isn't fed into its premium AI offering, which it bills as completely isolated from customer information.

Hello from Slack! To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results. And yes, customers can exclude their data from helping train those (non-generative) ML models. Customer data belongs to the…

- Slack (@SlackHQ) May 17, 2024

Still, most were caught off guard to learn that Slack's main AI features rely on an open tap into everyone's private conversations and files. Several users argued there should have been a prominent heads-up, letting people opt out before any data collection commenced.

The opt-out process itself is also a hassle. Individuals can't opt out on their own; an admin for the whole organization has to request it by emailing Slack with a very specific subject line, which you can find in the post above.

Some heavy hitters piled on the criticism. Meredith Whittaker, president of the privacy-focused messaging app Signal, threw some shade, saying "we don't collect your data in the first place, so we don't have anything to 'mine' for 'AI'." Ouch.

The backlash highlights rising tensions around AI and privacy as companies rush to one-up each other in developing smarter software.

Inconsistencies in Slack's policies aren't helping, either. One section says the company can't access underlying content when developing AI models. Another page marketing Slack's premium generative AI tools reads, "Work without worry. Your data is your data. We don't use it to train Slack AI. Everything runs on Slack's secure infrastructure, meeting the same compliance standards as Slack itself."

However, the data-mining admission in those same Privacy Principles seems to contradict these statements.

Over on Threads, a Slack engineer has tried to clear things up, saying the privacy rules were "originally written about the search/recommendation work we've been doing for years prior to Slack AI," admitting that they do need an update.

Still, the bigger issue is obviously the opted-in-by-default approach. While common in tech, it runs counter to the data-privacy principle of giving people an explicit choice over how their information gets used.
