Snap CTO Bobby Murphy described the intended result to MIT Technology Review as “computing overlaid on the world that enhances our experience of the people in the places that are around us, rather than isolating us or taking us out of that experience.”
In my demo, I was able to stack Lego pieces on a table, smack an AR golf ball into a hole across the room (at least a triple bogey), paint flowers and vines across the ceilings and walls using my hands, and ask questions about the objects I was looking at and receive answers from Snap’s virtual AI chatbot. There was even a little purple virtual doglike creature from Niantic, a Peridot, that followed me around the room and outside onto a balcony.
But look up from the table and you see a normal room. The golf ball is on the floor, not a virtual golf course. The Peridot perches on a real balcony railing. Crucially, this means you can maintain contact—including eye contact—with the people around you in the room.
To accomplish all this, Snap packed a lot of tech into the frames. There are two processors embedded inside, so all the compute happens in the glasses themselves. Cooling chambers in the sides did an effective job of dissipating heat in my demo. Four cameras capture the world around you, as well as the movement of your hands for gesture tracking. The visuals are displayed via micro-projectors, similar to those found in pico projectors, which do a nice job of placing three-dimensional images right in front of your eyes without requiring a lot of initial setup. The result is a tall, deep field of view—Snap claims it's comparable to a 100-inch display viewed from 10 feet—in a relatively small, lightweight device (226 grams). What's more, the lenses automatically darken when you step outside, so the glasses work well not just in your home but out in the world.
You control all this with a combination of voice and hand gestures, most of which came pretty naturally to me. You can pinch to select objects and drag them around, for example. The AI chatbot could respond to questions posed in natural language (“What’s that ship I see in the distance?”). Some of the interactions require a phone, but for the most part Spectacles are a standalone device.
It doesn’t come cheap. Snap isn’t selling the glasses directly to consumers; instead, you have to commit to at least a year of the Spectacles Developer Program, at $99 per month, to get access to a pair. I was assured that the company has a very open definition of who counts as a developer for the platform. Snap also announced a new partnership with OpenAI that takes advantage of OpenAI’s multimodal capabilities, which Snap says will help developers create experiences with real-world context about the things people see, hear, or say.
Having said that, it all worked together impressively well. The three-dimensional objects maintained a sense of permanence in the spaces where I placed them—meaning I could move around the room and they stayed put. The AI assistant correctly identified everything I asked it to. There were some glitches here and there—Lego bricks collapsing into each other, for example—but for the most part this was a solid little device.
It is not, however, a low-profile one. No one will mistake these for a normal pair of glasses or sunglasses. A colleague described them as beefed-up 3D glasses, which seems about right. They are not the silliest computer I have put on my face, but they didn’t exactly make me feel like a cool guy, either. Here’s a photo of me trying them out. Draw your own conclusions.