
Today, no one is surprised that every device gets the “smart” label. Phones, TVs, fridges, thermostats, headphones… and now, once again, glasses. The idea is simple: cram a computer, a display, a camera, and microphones into an everyday object and pretend it suddenly becomes “more useful”. The problem is that the result doesn’t always feel natural.
Smartphones and smart TVs took off rapidly. But “smart glasses”? They’ve been a tough nut to crack for years. Most people simply don’t wear glasses: some have good eyesight, and the rest opt for contact lenses. So who is the target audience? Glasses wearers? Fans of expensive Ray-Bans? Or is the point to move phone functions straight onto your face?
On top of that, there’s the issue of privacy. Cameras and microphones that are always on sound like a nightmare to many. Back in 2013, Google Glass users who completely ignored social conventions and the privacy of others even earned the nickname “glassholes”.
What can Ray-Ban Display Glasses do?
The new Meta and Ray-Ban glasses are quite a specific mix. In the right lens there’s a colour 600 × 600 px display with a 20° field of view, along with a 12 MP camera, six microphones, and stereo speakers. This isn’t full AR; rather, it’s a small screen that appears when you glance slightly to the right.
The most interesting addition, however, is the EMG band – the so-called Neural Band. It reads electrical signals from the muscles in your wrist to recognise hand gestures. In theory, it will even let you write in the air, although for now that feature is in beta.
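Meta hasn’t published how the Neural Band’s recognition actually works, but surface-EMG gesture pipelines in research generally share the same skeleton: slice the raw signal into short windows, extract a few simple features per electrode, and feed them to a classifier. Below is a purely illustrative Python sketch – the channel count, window size, gesture names, and the nearest-centroid classifier are all my assumptions, not Meta’s implementation:

```python
import numpy as np

# All parameters here are illustrative guesses; Meta has not disclosed
# the Neural Band's sampling rate, electrode count, or model.
N_CHANNELS = 8      # hypothetical electrodes around the wrist
WINDOW = 200        # samples per analysis window (e.g. 200 ms at 1 kHz)

def features(window: np.ndarray) -> np.ndarray:
    """Two classic surface-EMG features per channel:
    root mean square (muscle activation energy) and
    zero-crossing rate (rough frequency content)."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    zc_rate = np.mean(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([rms, zc_rate])

def classify(feat: np.ndarray, centroids: dict[str, np.ndarray]) -> str:
    """Nearest-centroid classifier: return the gesture whose averaged
    training features lie closest to the current feature vector."""
    return min(centroids, key=lambda g: np.linalg.norm(feat - centroids[g]))

# Toy "training": pretend we recorded two gestures and averaged their
# features into one centroid each (made-up gesture names).
rng = np.random.default_rng(0)
centroids = {
    "pinch": features(rng.normal(0.0, 0.5, (WINDOW, N_CHANNELS))),
    "swipe": features(rng.normal(0.0, 2.0, (WINDOW, N_CHANNELS))),
}

# A fresh window of simulated wrist EMG, then classification.
sample = rng.normal(0.0, 1.9, (WINDOW, N_CHANNELS))
print(classify(features(sample), centroids))  # expected: "swipe"
```

A shipping product would obviously use far more training data and a learned model rather than two centroids, but the overall shape – window, featurise, classify – is the standard one in the EMG literature.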
Compared with Google Glass: a more powerful processor, more flash storage, faster RAM – but still only 2 GB. Gesture control on the side of the frames is practically the same, so the biggest difference really is the wrist band. The catch? Another gadget that needs charging. And if you’re already wearing a smartwatch, well, at least you’ve still got a second wrist.
The Eternal Problem – What’s the Point?
Google Glass, Apple Vision Pro and the whole lot had one common issue: no one really knew what they were for. Checking emails or cat photos on a tiny screen? Navigation in the style of “a map in the corner of your eye”? All well and good until you have to reach for your phone to make sure you’re actually heading in the right direction.
Meta is banking on integration with Meta AI: real-time speech recognition, translated captions, quick replies. Sounds decent, but a regular smartphone does all of that just as well, with a much larger screen and a better UI.
Old Fears, New Glasses
In the background, the issue of the “panopticon” lingers – the sense that you could be watched at any time, anywhere. Smartphones at least make it obvious when someone is recording. Smart glasses? Not always. A tiny LED? Easy to overlook.
No wonder people react nervously. A video surfaced on TikTok of a woman who discovered that the person waxing her at the salon was wearing such glasses. Funny on the surface, perhaps, but it’s not hard to understand why she felt uncomfortable.
The problem is the same as with Google Glass: how to reconcile “innovation” with the fact that others feel they are being filmed without consent? And what if these are your only corrective glasses – do you have to take them off every time you enter a cinema or museum?
Dumber versions of “smart”
Not all smart glasses need a camera. There are automatically darkening sunglasses, and models that act as an extra screen for a laptop. There are also straightforward AR designs that don’t raise this kind of controversy. But those aren’t the ones that attract media attention – and they aren’t the ones saddled with the “glassholes” label.