Meta has a serious problem. New reports show that recordings from its smart glasses ended up in the hands of third parties, who viewed users' most private moments. The matter has already sparked lawsuits and investigations, and the scale of the problem is larger than initially thought.
Recordings from home were sent for analysis
Media reports indicate that Meta used an external company to analyse the recordings. Its employees were tasked with tagging objects, but in practice they saw much more. The material included intimate situations, daily routines, and private moments that users almost certainly did not intend to share. The issue is serious, as many people may not even have realised what exactly their devices were recording.
Not just images - audio was also analysed
That is not all. Meta also analysed audio recordings to develop its AI. This means that users' conversations could have been processed, even when they touched on sensitive topics. Moreover, there is no straightforward way to opt out of this kind of analysis entirely while still using the AI features.
The matter quickly reached the courts. A lawsuit was filed in the USA, and an investigation was launched in the UK. Experts emphasise that in this business model, the data matters more than the hardware. In practice, the user and their surroundings become a source of information for the company. Meta claims that its practices are similar to those of other companies and that it filters the data, but this has done little to reassure critics.
The scandal surrounding smart glasses reveals the significant risks associated with the development of AI and wearable devices. The boundary between convenience and privacy is becoming increasingly thin, and users may not have full control over what information reaches technology companies.
Source: Svenska Dagbladet, TechCrunch, flatpanelshd