Meta Ray-Ban AI Glasses Under Fire: Privacy Lawsuit and UK Probe Expose Hidden Data Review

A U.S. class action and UK regulatory investigation allege that contractors in Kenya reviewed intimate footage captured by Meta’s AI-powered smart glasses, raising serious questions about privacy claims in wearable AI devices.
Published: 2026-03-09 10:15

Meta’s Ray-Ban AI glasses are facing mounting legal and regulatory pressure after investigations revealed that contractors reviewing AI training data may have accessed intimate footage captured by users’ devices. The revelations have sparked a U.S. class action lawsuit and a formal investigation by the UK’s Information Commissioner’s Office (ICO).

What Happened

Swedish newspapers Svenska Dagbladet and Göteborgs-Posten published an investigation revealing that contractors working in Nairobi, Kenya, may have viewed highly personal footage captured by Meta’s AI-powered smart glasses. According to the report, some videos showed bathroom visits, sexual activity, and other intimate moments that users never intended to share.

The contractors are AI annotators—workers who review images, video, and audio to help train artificial intelligence systems. Meta’s smart glasses include an AI assistant that answers questions about what the wearer is seeing. To make those answers accurate, the system relies on training data that human reviewers analyze.

Regulatory Scrutiny

The UK’s Information Commissioner’s Office has written to Meta following the Swedish reporting, calling the investigation findings “concerning.” The ICO said it will request information from Meta about how user data is processed and who has access to it.

Meta’s Defense

Meta maintains that media captured by its smart glasses stays on the user’s device unless the user chooses to share it. A spokesperson stated:

“Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device. When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do. We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”

The company also noted that Ray-Ban Meta glasses include an LED indicator light that activates whenever photos or videos are recorded, signaling to people nearby that content is being captured.

Growing Scale, Growing Risk

The controversy arrives as Meta’s smart glasses have achieved remarkable commercial success. The company reportedly sold more than 7 million pairs in 2025—a dramatic increase from previous years. This scale means millions of users may have been affected by the data review practices.

At the same time, Meta has expanded the capabilities of its AI glasses and updated its privacy policies. One change keeps the AI camera features active unless users turn off the “Hey Meta” voice command. Another removes the ability to opt out of storing voice recordings in the cloud.

What This Means for Wearable AI

This incident highlights a broader reality for consumers: AI devices often collect more information than people realize. When users share content with AI systems, human reviewers may analyze that material to help improve the technology. Even when companies use tools to blur faces or hide identifying details, those systems do not always work perfectly.

For buyers, the incident raises important questions about the trade-off between convenience and privacy. As regulatory scrutiny intensifies, manufacturers may need to adopt stronger on-device processing or stricter opt-in requirements to avoid contractor review.

Sources: [Fox News](https://www.foxnews.com/tech/meta-smart-glasses-privacy-concerns-grow), [Glass Almanac](https://glassalmanac.com/ray%E2%80%91ban-meta-glasses-draw-u-k-probe-7m-sold-by-2025-why-it-matters-now/), [TechCrunch](https://techcrunch.com/2026/03/05/meta-sued-over-ai-smartglasses-privacy-concerns-after-workers-reviewed-nudity-sex-and-other-footage/)