"Demo or die." That's the unofficial motto of Meta and it's a bedrock principle espoused by Raymond Lo, the company's CTO. Lo spent a decade under the tutelage of Professor Steve Mann (known to many as the father of wearable computing), and is one of the few to make it through Mann's Ph.D. program at the University of Toronto. As an instructor, Mann requires tangible results on a regular basis from his students' projects, and now, with Lo as CTO and Mann as chief scientist, Meta's operating with the same ethos as it develops
augmented mediated reality headsets. Meta's idea is to meld the real and the digital together in a fully functional computing environment. It wants to augment your reality, and, in fact, mediate it.
We saw a prototype mediated reality headset from Meta a couple of months ago, when we witnessed some rudimentary demos: typing in thin air and grabbing and moving digital objects with our hands. Naturally, the company's made some improvements in the interim. The latest prototype hardware has morphed into a slightly more integrated design, but it still has the boxy, rough appearance of a 3D-printed prototype. Which, of course, it is. The Kinect-stuck-atop-a-pair-of-Rec-Specs look is only temporary, however, as Meta is finally ready to start taking orders for its first production headset, the Meta.01. You can pre-order one for $667 on Meta's website, with deliveries set to begin in November. As opposed to the prototype you see in the image above, renders of the commercial device look like a cross between ski goggles and a pair of Oakleys. The magic of Meta doesn't lie in its looks, however.
Mann's influence shows not only in its technology, but also in the terminology Meta uses. You see, the professor isn't a proponent of just augmented reality; he prefers mediated reality. The difference? Augmented reality adds things to your field of view, while mediated reality can add what you want to see and remove that which you don't. For example, a mediated reality system could serve like a sort of real-world ad-blocking system, removing ugly ads or unwanted billboards from view. Mann's custom-built EyeTap headsets have been mediating his own reality for years, and he joined Meta to help bring the technology to others.
Both Lo and CEO Meron Gribetz know that the tech landscape is littered with vaporware that once promised functionality its makers couldn't deliver, but Meta's determined that its technology won't fall into that trap. The company's latest video shows its glasses performing numerous tasks enabled by a three-dimensional, natural UI. From 3D virtual chess matches to sculpting a vase in thin air and tossing it into a 3D printer for construction, the video showcases a futuristic computing environment that lets users interact with both digital constructs and real-world objects seamlessly and intuitively. The software aims to be both functionally efficient and pretty. To accomplish those goals, Meta enlisted the services of Jayse Hansen (the man who designed Iron Man's HUD) and Professor Steve Feiner (a pioneer in augmented reality and 3D user interfaces) to help design Meta's UI. It looks great in a produced demo, but those types of experiences aren't computing reality... yet.
Gribetz informed us that the majority of the demos seen in the video (above) would be functional by the end of the year. He estimates that the chess game, for example, is "about 70 percent complete" already, and the 3D-printing demo isn't far from completion, either. "We're starting up a YouTube channel that will give people through-glass views of our applications so that they can see that they're real," says Gribetz.
Meta seeks to be much more than both Oculus Rift and Google Glass -- two other headsets that invite comparisons to Meta even though they serve very different goals. Google's headset has a monocular display worn above the user's regular field of vision, and uses far less powerful hardware than what's in Meta's binocular headset. This limits its functionality, as Gribetz says, to being a glorified "notification machine." Meanwhile, the Oculus Rift is a virtual reality headset that separates the wearer from the real world by putting users in a self-contained digital space. Meta lets you see 3D digital objects projected in real space and interact with them, thus mediating your real-world experience.
Featureless-surface tracking is a foundational element to making mediated reality possible, as a system cannot digitally remove things from view if it can't first identify and track them. Most existing augmented reality applications involve some sort of visual marker to tell the AR device where to populate the desired digital additions, like the AR cards for Nintendo's 3DS. With perfectly functioning featureless-surface tracking, everything is a marker -- which means any object or surface is a canvas upon which an augmented or mediated reality construct can be projected. It's what makes the seamless melding of real and digital worlds truly possible.
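To make the idea concrete, here's a minimal sketch of how a depth sensor can track a featureless surface like a blank sheet of paper: with no visual texture to latch onto, the system can still fit a geometric model (here, a plane) to the raw depth points and follow its orientation frame to frame. This is purely illustrative and not Meta's algorithm; all names and numbers are hypothetical.

```python
# Hypothetical sketch: fit a plane to depth-camera points so a virtual
# overlay can stay locked to a textureless surface. Not Meta's method.
import numpy as np

def fit_plane(points):
    """Least-squares fit of a plane z = a*x + b*y + c to Nx3 points.
    Returns (a, b, c); the surface normal is (a, b, -1), normalized."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

# Synthetic depth samples from a tilted sheet: z = 0.1x - 0.2y + 1.5 metres
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.1 * xy[:, 0] - 0.2 * xy[:, 1] + 1.5
pts = np.column_stack([xy, z])

a, b, c = fit_plane(pts)
print(round(a, 3), round(b, 3), round(c, 3))  # recovers 0.1 -0.2 1.5
```

Re-running the fit every frame gives the surface's current pose, which is what lets an overlay stay centered as the sheet moves; handling a warped or crumpled sheet, as Meta's demo does, requires a deformable model rather than a single plane.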
While we didn't see any of the demos in the video or a pair of the Meta.01 specs, we did get a peek at a prototype device that showcased the bedrock technologies that will enable the kinds of user experiences Meta seeks to provide. "One of the most difficult problems to solve in augmented reality is featureless-surface tracking," according to Gribetz. And he thinks Meta has solved that problem, thanks in large part to the knowledge provided by Lo and Mann. "They've been working on this problem for years, and their involvement has given us access to tracking algorithms that aren't available to anyone else."
We got to see those algorithms work as Lo's prototype glasses tracked a plain, white sheet of paper and displayed a video on it. The system had no trouble keeping the video locked and centered on the sheet as we waved it around, even doing so when the paper was warped and flexed. It even followed the contours of the sheet after it was crumpled up and spread back out. It's an impressive trick that opens up the possibility of overlaying digital data on anything and everything you see.
It also bears mentioning that the system deals well with occlusion, the process by which graphical elements that should not be visible are prevented from being rendered. High-quality occlusion algorithms are an important part of any 3D graphical interface, as they help tremendously in providing users with accurate digital depth perception. For an AR experience to feel natural, any digital constructs depicted in three-dimensional space need to be obstructed by a user's hand when it is positioned in front of those constructs. And, Meta's prototype performed admirably in doing just that. Granted, there was some graphical flicker that occasionally hid our hands, but for the most part, the system tracked our fingers without issue and was able to keep graphics mostly in the background when desired.
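The core of that hand-over-graphics effect can be sketched in a few lines: given a depth map of the real scene and the depth of the virtual content, a virtual pixel is drawn only where it is nearer to the camera than the real geometry in front of it. This is a generic per-pixel depth test, offered only as an illustration of the principle, not Meta's implementation.

```python
# Illustrative depth-based occlusion: hide virtual pixels wherever real
# geometry (e.g. the user's hand) is closer to the camera. Hypothetical code.
import numpy as np

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Overlay virtual content, occluded by nearer real-world surfaces."""
    visible = virt_depth < real_depth      # per-pixel depth test
    out = real_rgb.copy()
    out[visible] = virt_rgb[visible]
    return out

# Tiny 2x2 frame: a hand at 0.5 m covers one pixel of a virtual object at 1 m
real_rgb   = np.zeros((2, 2, 3), dtype=np.uint8)          # real scene (black)
real_depth = np.array([[0.5, 2.0], [2.0, 2.0]])           # metres; 0.5 = hand
virt_rgb   = np.full((2, 2, 3), 255, dtype=np.uint8)      # virtual (white)
virt_depth = np.full((2, 2), 1.0)

frame = composite(real_rgb, real_depth, virt_rgb, virt_depth)
print(frame[0, 0].tolist(), frame[0, 1].tolist())  # [0, 0, 0] [255, 255, 255]
```

The flicker we saw in Meta's prototype is what happens when that per-pixel depth estimate is momentarily noisy: a pixel flips to the wrong side of the test and the hand briefly disappears behind the graphics.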
Of course, the prototype device we saw accomplished these feats with some hefty hardware -- the headset was tethered to a sizable wearable computer worn around the waist. The headset itself houses a laser 3D depth sensor, plus IR and RGB sensors and an inertial measurement unit capable of detecting movement in nine degrees of freedom. When asked if this is the hardware we'd see in production headsets like Meta.01, Gribetz said that the kinds of sensors and components would be the same as what's in the prototype. But he also declared that he plans to use the best available hardware that suits his needs, and he's not tied to any one specific component. When it comes to the hip-pack computer, well, eventually that will come with a future device and will house serious hardware. "We always want to have the most powerful CPU and GPU on the market," he says.
As for the first commercially available headset, the Meta.01: it'll piggyback on your laptop's hardware, connecting to it via USB -- and it will be 2014 at the earliest before we see any self-contained Meta models. However, Gribetz plans to release new hardware regularly, every six to 12 months, and untethered glasses are on Meta's roadmap, as is shrinking the form factor to something akin to a pair of regular sunglasses. Yet, you shouldn't expect Meta to push out anything like Google Glass, even as its components and designs shrink.
From left, Meta's CTO Raymond Lo, CEO Meron Gribetz and VP of Logistics Karen Kwan
Right now, user experience is the prime developmental design principle, which Gribetz feels is a superior path to the one Google has taken with Glass: low-powered hardware that has a greater focus on form factor than functionality. "Google's gig is building an extension of a smartphone that acts as a notification machine. My gig is building a machine that will enable users to create." The folks at Meta are focused on refining their hand- and surface-tracking technologies, with the next big focus being expanding the glasses' field of view (right now, the display covers only a fraction of what you see). Essentially, Meta wants to make the system work well before it worries about making it small or fashionable -- the component makers and Moore's law will do a lot of that work for Meta, anyway.
In the meantime, expect the demos to continue. Death awaits, otherwise, right?