
Meta AI Chat Reviews by Contractors Expose User Privacy Risks


Many users may not realize that their private conversations with Meta's AI chatbot are reviewed by human contractors, some of whom have access to personal information such as full names, phone numbers, email addresses, gender, hobbies, and even photos. Business Insider has learned that contractors hired through platforms like Outlier and Alignerr routinely encounter sensitive user data while evaluating AI responses.

Four contractors who worked on Meta AI projects said they frequently saw real user conversations containing personally identifiable information (PII). One estimated that PII appeared in 60% to 70% of the thousands of chats they reviewed each week. Some chats included selfies, explicit content, or intimate details about relationships, mental health, and personal struggles. Users from the U.S., India, and other countries shared these messages, often treating the AI like a confidant or therapist.

In some cases, Meta provided contractors with user profiles containing personal details, such as first names, locations, and interests, intended to help train the AI to respond more naturally. These details were drawn from past interactions and social profile activity. Project documents reviewed by Business Insider confirmed that contractors were expected to use this information to assess how well the AI personalized its responses.

While Meta says it limits what contractors can see and has strict policies in place, contractors said they were not always able to reject chats containing PII. In one project, called PQPE, they were required to work with unredacted data. In another, Project Omni, contractors were instructed to flag and skip tasks with PII, but many still encountered it regularly.

A Meta spokesperson stated that contractors are trained to handle PII responsibly and that access is restricted to what is necessary for the task. The company also emphasized that contractors undergo cybersecurity and privacy risk assessments. Scale AI, which operates Outlier, said contributors are only allowed to process data as needed and must report any PII they find.

Despite these safeguards, Business Insider found that a single chat with explicit content contained enough details (name, city, gender, hobbies) to locate a matching Facebook profile within minutes. One contractor described the experience as so distressing that they had to stop working for the day.

Experts warn that inconsistent data protection across AI platforms poses serious risks. Miranda Bogen of the Center for Democracy & Technology said users should never assume chatbot interactions are private, especially given how emotionally intimate these conversations can be. She noted that automated filters often fail to catch all PII, and that human review introduces additional privacy vulnerabilities; the sketch at the end of this article illustrates why such filters fall short.

Users are also not always aware when their AI chats become public. In June, Business Insider reported that some Meta AI conversations were appearing in a public feed, exposing personal details. Meta later added a warning, but the app still allows users to share chats publicly, making them searchable via Google.

Sara Marcucci of the AI + Planetary Justice Alliance said the findings point to weak enforcement of data minimization and user control. Bogen added that while human oversight is meant to improve safety, it also reveals that current systems are flawed and not fully trusted, even by the companies themselves.
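To make Bogen's point about automated filters concrete, the sketch below shows a naive first-pass regex PII filter in Python. This is an illustration under stated assumptions, not Meta's, Outlier's, or Alignerr's actual tooling: the patterns, the redact function, and the example chat are invented for this article. The point it demonstrates is that structured identifiers (emails, phone numbers) are easy to match, while contextual quasi-identifiers (a name, a city, an occupation, a hobby) pass through untouched.

```python
# Illustrative sketch only: a naive regex-based PII filter of the kind
# often used as a first pass. The patterns and example are assumptions
# for illustration and do not reflect any company's actual tooling.
import re

# "Structured" PII with predictable shapes is easy for regexes to catch.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each structured-PII pattern with a tag."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

# Hypothetical chat message of the kind contractors described reviewing.
chat = (
    "Hi, I'm Dana, a 34-year-old nurse in Austin who loves rock climbing. "
    "Email me at dana.example@mail.com or call 512-555-0199."
)

print(redact(chat))
# Output: the email and phone number are redacted, but the name, age,
# occupation, city, and hobby all survive -- exactly the combination of
# quasi-identifiers that can single out one person.
```

Run on the example, the filter scrubs the email address and phone number but leaves "Dana", "34-year-old nurse", "Austin", and "rock climbing" intact. Individually those details look harmless; combined, they narrow the candidate pool sharply, which is consistent with Business Insider's finding that one chat's name, city, gender, and hobbies were enough to locate a matching Facebook profile within minutes.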
