Meta has a serious problem. New reports show that recordings from its smart glasses ended up in the hands of contractors who viewed users' very private moments. The issue has already sparked lawsuits and investigations, and the scale of the problem is bigger than previously thought.
Recordings from users' homes were sent for analysis
According to media reports, Meta hired an external company to analyse the recordings. Its employees were tasked with tagging objects, but in practice they saw much more: the footage included intimate situations, daily routines, and private moments that users likely never intended to share. The problem is serious because many people may not have realised what exactly their devices were recording.
Not just video - audio was also analysed
That is not all. Meta also analysed audio recordings to train its AI, meaning users' conversations may have been processed even when they touched on sensitive topics. Worse, there is no simple way to opt out of this analysis entirely while still using the AI features.
The matter quickly went to court: a lawsuit was filed in the USA, and an investigation was launched in the UK. Experts stress that in this business model the data, not the hardware, is what matters most; in practice, the user and their surroundings become a source of information for the company. Meta claims its practices are similar to those of other companies and that it filters the data, but this has not reassured critics.
The scandal surrounding smart glasses highlights the significant risks posed by the combination of AI and wearable devices. The line between convenience and privacy is growing ever thinner, and users may not have full control over what data reaches tech companies.
Source: Svenska Dagbladet, TechCrunch, FlatpanelsHD