March 05, 2026 ChainGPT

Ray-Ban Smart Glasses Scandal: Nairobi Contractors Reviewed Intimate Footage — Crypto Wake-Up Call

Meta’s Ray-Ban smart glasses are at the center of a new privacy storm after a Swedish investigation found that sensitive first-person footage captured by the devices was reviewed by offshore contractors in Nairobi.

What happened
- A joint probe by Svenska Dagbladet and Göteborgs-Posten reports that Meta tapped a Nairobi-based data firm to help train AI systems for its Ray-Ban smart glasses. Contractors told reporters they reviewed videos showing people in private and intimate situations — including using the toilet, changing clothes, and sexual activity — as well as footage with visible credit card numbers.
- An unnamed source told reporters: “In some videos, you can see someone going to the toilet, or getting undressed. I don’t think they know, because if they knew, they wouldn’t be recording.”
- The Nairobi subcontractor was identified in the investigation as Sama, a company that hires workers to manually annotate images, videos, and speech for AI training. Workers described strict surveillance in their offices and said they felt unable to refuse or question assignments for fear of losing their jobs.

Why this matters
- John Davisson, Deputy Director of Enforcement at the Electronic Privacy Information Center, flagged the broader privacy risk: wearers cannot consent on behalf of everyone they encounter in public or intimate settings, and training AI models on such footage can expose identifiable faces, voices, and other personal data.
- Davisson told Decrypt this compounds privacy and data-protection concerns because “you're taking people's personal information and using it to build your own model.”
- The UK’s Information Commissioner’s Office told the BBC it will contact Meta to request information on how the company complies with UK data-protection laws, noting that devices processing personal data should be transparent and give users control over their data.
Meta’s stance and product details
- Meta’s Ray-Ban smart glasses, co-developed with Ray-Ban and launched in 2023, let users record first-person video, query their surroundings, and interact with Meta’s AI assistant. CNBC reported last month that more than 7 million pairs were sold in 2025, up from a combined 2 million units in 2023–2024.
- Meta’s AI terms of service disclose that interactions with AIs and certain user content “may be reviewed,” either automatically or by humans, and that footage can be sent to human contractors for labeling and training purposes. Meta maintains it does not access private shared messages, but says user content may be used through automated systems, human review, or third-party vendors to improve services and ensure policy compliance.
- Meta did not respond to Decrypt’s request for comment.

Voices from the ground
- Contractors told the Swedish papers they encountered recordings of intimate situations and sensitive financial information, and said they were instructed to label and quality-assure every image. One contractor described explicit sex scenes filmed while someone was wearing the glasses, calling the footage “extremely sensitive.”
- Workers also said they were monitored in their workplaces and felt unable to question assignments: “When you see these videos, it feels that way. But since it is a job, you have to do it… If you start asking questions, you are gone.”

Why crypto readers should pay attention
This controversy underscores the limits of centralized data governance and the risks of mass data collection feeding opaque AI training pipelines. For crypto and Web3 communities focused on privacy-preserving identity, on-chain consent mechanisms, and decentralized data control, the case is a reminder of why alternatives to centralized data harvesting matter — and why regulatory scrutiny of data flows is likely to increase.
Bottom line
A major tech player’s wearable-camera program has raised fresh concerns about who sees — and who trains AI on — intimate, personally identifiable footage. Regulators are asking questions, workers are speaking out, and the debate over privacy, consent, and how AI is trained just got a new, very public flashpoint.