As children, we knew there were only two ways to properly throw a fireball or raise an X-wing off the ground: through the power of our imagination, or through a controller and a connected gaming console. Nowadays, the reality is different. Our gadgets can actually enable us to manipulate gaming worlds and everyday devices with the physical movement of our bodies -- no buttons required.
Care to see how motion tracking has evolved over the years? Then take a tour of the gallery for a brief retrospective.
In 1974, computer scientist Myron Krueger began work on his interactive "Videoplace" project. The experience let participants simulate an out-of-body experience by allowing them to view and control the motion of their silhouettes displayed on large screens in front of them. The idea was to have these participants identify with the image as a natural extension of themselves.
By the '80s, Krueger had further refined the concept, and developed computer systems that allowed a user's gestures and movements to interact with virtual objects. Krueger believed this hardware-free form of motion tracking was preferable to the emerging model of virtual reality goggles.
Mattel released the Power Glove controller for the Nintendo Entertainment System in 1989 and quickly sparked popular interest. The device allowed users to control on-screen gaming action using hand movement alone. This was done through the use of ultrasonic transmitters embedded within the glove that communicated with a sensor array placed around the TV screen.
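The general principle -- though the glove's exact protocol isn't detailed here -- can be sketched as time-of-flight trilateration: each sensor's flight time yields a distance, and intersecting the distance circles recovers the glove's position. The 2D receiver layout and helper below are hypothetical.

```python
# Hypothetical 2D sketch of ultrasonic trilateration. Each receiver around
# the screen measures the flight time of a ping from the glove; flight time
# times the speed of sound gives a distance to that receiver.

SPEED_OF_SOUND = 343.0  # meters per second, in air at roughly 20 C

def trilaterate_2d(receivers, times):
    """Solve for (x, y) from three known receiver positions and the
    measured flight times, by intersecting the distance circles."""
    d = [SPEED_OF_SOUND * t for t in times]
    (x1, y1), (x2, y2), (x3, y3) = receivers
    # Subtracting the circle equations pairwise cancels the quadratic
    # terms and leaves a 2x2 linear system in x and y.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d[0] ** 2 - d[1] ** 2 - x1 ** 2 + x2 ** 2 - y1 ** 2 + y2 ** 2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d[1] ** 2 - d[2] ** 2 - x2 ** 2 + x3 ** 2 - y2 ** 2 + y3 ** 2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

The same subtraction trick extends to 3D with a fourth receiver, which is why real setups ring the screen with several sensors.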
Despite its brief popularity, very few games were tailored for the Power Glove. Its technical shortcomings certainly didn't help either, and the accessory became known more as a gimmick. Regardless, the Power Glove went on to become a beloved footnote in gaming history and even a popular meme. There's also a proposed documentary on the device that you can help Kickstart into existence.
Undaunted by the Power Glove's rise and fall, Mattel continued to explore motion control for gaming, and in 2000 it teamed up with Intel to release a new line of smart toys. Among them was the Intel Play Me2Cam virtual game system. The computer-mounted camera worked by displaying players' images on the computer screen, allowing for gesture control and object interaction. Player movements could trigger such actions as bubble popping or snowboarding.
[Image and logos: Robert Ludemann/eightoeight for Intel]
Sony released its PlayStation 2 EyeToy in 2003, aiming to make a motion-controlled user interface mainstream. The EyeToy functioned similarly to Mattel's Me2Cam: the device's digital camera captured the player's image and superimposed it onto the gaming environment. The interface led to a new category of motion-based games that had players whacking away at ninjas and deflecting ping-pong balls with their bare hands. The EyeToy had its limitations, though. It worked best in well-lit rooms with sparse furnishings; any home setup that strayed from this would confuse the camera too much for proper use.
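Camera-only games of this era typically detected a "whack" with simple frame differencing rather than any real depth sensing. A minimal sketch of the idea, using nested lists as grayscale frames (the threshold value is an arbitrary assumption):

```python
def motion_mask(prev_frame, frame, threshold=30):
    """Flag pixels whose brightness changed more than the threshold
    between consecutive frames -- a crude proxy for player movement."""
    return [[abs(a - b) > threshold for a, b in zip(row_prev, row_cur)]
            for row_prev, row_cur in zip(prev_frame, frame)]

def any_motion_in(mask, region):
    """Check whether any flagged pixel falls inside a rectangular hit
    region, e.g. the on-screen ninja the player is meant to whack."""
    (r0, c0), (r1, c1) = region
    return any(mask[r][c] for r in range(r0, r1) for c in range(c0, c1))
```

This also explains the EyeToy's sensitivity to lighting and clutter: anything that changes pixels -- a flickering lamp, a passing pet -- looks like motion.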
The launch of the Wii in 2006 represents an indisputable high point for Nintendo and the company's storied history of innovation. Instead of using a camera system like the EyeToy, Nintendo relied on the handheld Wii Remote (or Wiimote) as its primary source of player input. The Wiimote used accelerometers and optical sensors to track a limited range of gestures. The simple concept was a huge hit with the general public, and helped get the console into traditionally non-gaming households. It also got people off the couch and swinging their remotes (haphazardly) around the living room. It's no wonder the Wii came packed with a wrist strap.
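At rest, an accelerometer like the Wiimote's mostly measures gravity, which is enough to estimate how the remote is tilted. The axis convention below (z pointing up out of a face-up remote, readings in g) is an assumption for illustration:

```python
import math

def tilt_from_accel(ax, ay, az):
    """Estimate pitch and roll in radians from a static accelerometer
    reading, treating the measured vector as gravity alone. This breaks
    down during fast swings, when motion acceleration swamps gravity --
    one reason Wiimote tracking stayed coarse."""
    pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll = math.atan2(ay, az)
    return pitch, roll
```

The optical sensor complemented this by tracking the IR lights of the sensor bar for pointing, since accelerometers alone can't observe heading.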
That odd-looking white and blue device in the photo might look like a fancy alarm clock, but it's actually a 3D motion-sensing prototype that was developed by PrimeSense. It's also the precursor to the Kinect. PrimeSense caught Microsoft's attention back in 2006 and, a few short years later, its tech became the heart of Project Natal -- now known as Kinect.
See that tiny circuit board in the image? That's the Capri 3D sensor, a scaled-down version of the Kinect tech that PrimeSense aimed more toward environment mapping and augmented reality than interactive gaming. Now, the Capri sensor can be found powering Google's Project Tango devices.
Oh, and Apple bought the company in 2013. PrimeSense, it seems, is onto something good with this motion-tracking stuff.
While the Wiimote was a great toy, others in the industry were looking to advance the technology beyond basic, imprecise waggle. In 2008, Sixense, a company made up of VR and 3D-modeling enthusiasts, demoed its TrueMotion 3D controller. It went beyond the accelerometer by leveraging magnetic fields to track hand position in 3D space with an incredible degree of accuracy.
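Magnetic tracking rests on how sharply a dipole field falls off with distance -- magnitude scales roughly as 1/r^3 in the far field -- which makes measured field strength a sensitive proxy for range. A toy inversion of that relationship (the calibration values are made up; real systems also use field direction to recover orientation):

```python
def distance_from_field(b_measured, b_ref, r_ref=1.0):
    """Invert the 1/r^3 dipole falloff: given the field magnitude b_ref
    at a known reference distance r_ref, recover the distance that
    produces a new reading b_measured."""
    return r_ref * (b_ref / b_measured) ** (1.0 / 3.0)
```

Because the reading changes steeply with distance, small position changes are easy to resolve -- and unlike a camera, a magnetic field doesn't care about occlusion or lighting.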
Sixense went on to partner with Razer on a motion-sensing game controller, which was released in 2011 as the Razer Hydra. In 2014, Sixense made some moves on its own with a demo of its upcoming STEM controller and its virtual reality 3D-modeling software, MakeVR.
Capitalizing on the motion-sensing trend that Nintendo had set in motion four years earlier, Sony released its PlayStation Move system for the PlayStation 3 in 2010. Sure, the EyeToy had arrived years before the Wii, but the camera-based tech failed to catch on to the degree that wand-based systems did. The PlayStation Eye camera does play a role in the Move's motion tracking, though, working in tandem with the controller to track the location of its glowing LED orb.
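The camera can't measure distance directly, but the Move's orb has a known physical size, so a standard pinhole-camera relation recovers depth from how large the orb appears in the image. The focal length and orb radius below are illustrative numbers, not Sony's actual values:

```python
def orb_depth(focal_px, orb_radius_m, radius_px):
    """Pinhole projection: apparent size shrinks linearly with distance,
    so depth = focal length (pixels) * real radius / observed pixel
    radius. The orb's center in the image gives the other two axes."""
    return focal_px * orb_radius_m / radius_px
```

Halving the orb's apparent radius doubles the estimated depth, which is why a large, uniformly glowing sphere -- easy to segment and measure -- makes such a good tracking target.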
Five years after the launch of the Xbox 360, Microsoft released its Kinect accessory. Unlike Nintendo's and Sony's motion-control offerings, the Kinect offered truly hands-free interaction. The peripheral included a camera, audio sensors and motion-sensing tech that could track 48 points of body movement, as well as recognize faces and voices. As Microsoft put it, "You are the controller." But the "controller" did require a good amount of space, with the optimal location being six to eight feet away from the sensor in an uncluttered environment (read: your TV room).
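Once a sensor like the Kinect reports per-joint positions, gesture and pose recognition can be simple geometry on those joints. A toy example -- the joint names and y-up coordinate convention are assumptions, not Microsoft's API:

```python
def hands_raised(joints):
    """Pose check on a tracked skeleton: True when both hands sit above
    the head joint. Positions are (x, y, z) tuples with y pointing up."""
    head_y = joints["head"][1]
    return (joints["left_hand"][1] > head_y and
            joints["right_hand"][1] > head_y)
```

Real pipelines chain many such checks over time to recognize full gestures, but the raw ingredient is exactly this: comparisons between tracked joint coordinates.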
If you thought facial recognition and tracking were just for video games, you'd be mistaken. In 2011, EviGroup demoed its Paddle Pro hands-free tablet. Its front-facing webcam detected and tracked facial position and movement, translating that into cursor control. Extended stares could even be used to initiate mouse clicks.
Of all the motion-tracking tech that seems to have been inspired by the 2002 film Minority Report, Leap Motion's solution comes the closest to delivering on that sci-fi future. Announced in 2012 and shipped the following year, the Leap Motion offered a truly hands-free computer interface that could track 10 individual finger motions at once.
The Leap Motion makes use of infrared optics and cameras rather than the depth sensors found in many other devices on the market. Unfortunately, the device's interface currently has limited functionality and reach; only a few compatible apps are available. Touchscreens, it seems, will live to see another day.
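With two cameras instead of a dedicated depth sensor, depth can be recovered from stereo disparity: a nearby fingertip shifts more between the two views than a distant one. A minimal sketch assuming a calibrated rig -- the focal length and camera baseline are invented numbers:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo relation: depth = focal length * baseline /
    disparity. Smaller disparity means the point sits farther away."""
    return focal_px * baseline_m / disparity_px
```

Because disparity appears in the denominator, depth precision degrades quickly with range -- one reason a device like this tracks hands in a small volume above the desk rather than a whole room.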
Motion sensors have played a role of convenience in our lives for quite a while -- just imagine opening all those automatic shop doors with your hands full. But user error is always a factor to be considered.
When Nest released its Protect smoke alarms in 2013, it added a wave-to-dismiss feature, hoping to save people the trouble of climbing on chairs to silence the high-tech warning system. Unfortunately, the device had trouble differentiating a wave of dismissal from other forms of casual movement -- a dangerous defect that prompted Nest to halt sales until a software fix could be implemented.
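One way to separate a deliberate wave from someone simply walking past is to require repeated direction reversals of sufficient amplitude in the tracked motion. The thresholds below are arbitrary, and this is a sketch of the general idea, not Nest's actual fix:

```python
def looks_like_wave(xs, min_reversals=3, min_amplitude=0.15):
    """Heuristic wave detector over a horizontal position track (meters).
    A deliberate wave swings back and forth several times; a casual
    pass-by moves in one direction and produces no reversals."""
    if max(xs) - min(xs) < min_amplitude:
        return False
    reversals = 0
    prev_sign = 0
    for a, b in zip(xs, xs[1:]):
        delta = b - a
        if delta == 0:
            continue
        sign = 1 if delta > 0 else -1
        if prev_sign and sign != prev_sign:
            reversals += 1
        prev_sign = sign
    return reversals >= min_reversals
```

Tightening the reversal count cuts false dismissals at the cost of making legitimate waves harder to register -- exactly the trade-off a safety device has to weigh.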
Like most technologies nowadays, motion sensing has evolved to the point that it can be shrunk down and stuffed into smartphones. Amazon's recent Fire Phone is the best example of this. The smartphone has no fewer than four "invisible infrared illumination sensors," as well as front- and rear-facing cameras, to enable 3D head-tracking. This tech helps it figure out which way you're looking and how close you are to the screen so that you can "look around" objects -- what the company calls Dynamic Perspective.
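The head-tracking data feeds a simple parallax calculation: a UI layer rendered as if it sat some depth behind the screen should slide across the screen as the viewer's head moves off-center, by an amount given by similar triangles. All the distances below are illustrative:

```python
def parallax_shift(head_offset_m, layer_depth_m, viewing_distance_m):
    """On-screen shift of a virtual layer sitting layer_depth_m behind
    the screen plane, viewed by an eye displaced head_offset_m from
    center at viewing_distance_m in front of the screen. Derived from
    similar triangles along the eye-to-layer sight line."""
    return head_offset_m * layer_depth_m / (viewing_distance_m + layer_depth_m)
```

Deeper layers shift more for the same head movement, which is what creates the "looking around" illusion from a flat display.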