A broad coalition of civil society groups is sounding the alarm over Meta’s plan to embed facial recognition into its popular Ray-Ban smart glasses, warning the feature would create a new frontier for surveillance and abuse. The company’s push comes as investigations reveal a hidden pipeline where sensitive user footage, including scenes of nudity and private conversations, is routinely reviewed by overseas contractors, often without the knowledge of the people being recorded. This convergence of advanced biometric tracking and intimate data handling has triggered urgent calls for regulators to intervene before the technology becomes ubiquitous.
Last week, 64 consumer advocacy groups sent a letter to Meta, Ray-Ban parent company EssilorLuxottica, the White House, the Federal Trade Commission, and the Department of Justice demanding a halt to the facial recognition rollout. The coalition, led by the Consumer Federation of America and Ultraviolet Action, called the plan “dangerous and reckless.” Their letter stated, “This move will endanger us all, and particularly give ammunition to scammers, blackmailers, stalkers, child abusers, and authoritarian regimes.”
Internally dubbed “Name Tag,” the feature would allow glasses wearers to identify individuals with public Meta accounts and retrieve information about them using Meta’s AI assistant. According to internal documents reviewed by The New York Times, Meta knew the feature carried “safety and privacy risks.” The company reportedly planned an initial release at a conference for blind attendees. An internal memo from Meta’s Reality Labs noted the company would launch “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
The commercial stakes are high. EssilorLuxottica reported selling more than 7 million pairs of the smart glasses last year. This push follows a history of regulatory penalties for Meta, including a $5 billion FTC settlement in 2019 for privacy violations and billions more to settle lawsuits in Illinois and Texas over unauthorized facial data collection.
A separate investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten peeled back the curtain on how Meta’s AI systems are trained. It found that contractors in Nairobi, Kenya, hired through outsourcing firms like Sama, routinely annotate sensitive images, videos, and audio collected through the glasses. Interviews with more than 30 employees revealed the material often includes highly intimate content. “In some videos you can see someone going to the toilet, or getting undressed,” one worker said. “I don’t think they know, because if they knew they wouldn’t be recording.”
Other annotators described reviewing clips showing nudity, sexual activity, and financial information like visible bank cards. One worker summarized, “We see everything — from living rooms to naked bodies.” They also noted users can record themselves without realizing it. As one annotator put it, “You think that if they knew about the extent of the data collection, no one would dare to use the glasses.”
Meta states that sensitive data are not intended for AI training and claims safeguards like face blurring are in place. However, former employees and current annotators say these protections frequently fail. “The algorithms sometimes miss,” one former Meta worker said, noting faces and bodies can remain visible. The company’s privacy terms state that some data may be reviewed by humans, but experts argue the line between voluntary sharing and automatic collection is dangerously unclear.
The privacy invasion extends beyond data centers. The glasses’ discreet design, resembling ordinary eyewear, makes covert filming simple. A small LED indicator light is meant to signal recording, but online tutorials show how to disable it, and special patches are sold to trick the sensor. This has created a new tool for harassment, with women disproportionately targeted. Kassy Zanjani of Vancouver was secretly recorded by a stranger using the glasses; the video was posted online and viewed tens of thousands of times. “Now, this is on the forefront of my mind,” Zanjani said. “There’s fear, I’m not able to enjoy public spaces.”
Legal recourse is limited. When Zanjani contacted police, she was told her case did not meet the criteria for existing harassment or intimate image laws in Canada. Law professor Wayne MacKay argues legislation must focus on the harm rather than the specific technology. “If the harm is the taking of the information or the posting of the images, that should be the main focus,” he said.
U.S. Senators Ron Wyden and Jeff Merkley have separately demanded that Meta explain its facial recognition plans, setting an April 6 deadline for a response. It is not yet clear whether Meta has replied; neither the senators’ offices nor the company responded to news outlets’ requests for comment.
The story of Meta’s smart glasses is a familiar one in the digital age: a rush to innovate and monetize, followed by belated scrutiny of the human consequences. It reveals a world where a casual conversation in a café or a private moment in a home can become training data for a corporate AI, reviewed by a stranger thousands of miles away. As these glasses sell by the millions, the debate is no longer about a speculative future but about the privacy standards we are willing to accept today. The question for regulators and the public is whether the convenience of a wearable camera is worth the erosion of our collective right to be unseen and unknown.