Microsoft Research

Latest

  • Microsoft HoloDesk: when you need to juggle something that isn't there (video)

    by Daniel Cooper, 10.20.2011

    If you'd thought that OmniTouch and PocketTouch were the end of Microsoft Research's natural user interface projects, think again. It's now released a video of the HoloDesk, a tool that lets you manipulate virtual 3D objects with your bare hands. Looking through a transparent display, the objects react nearly instantly, rolling from a sheet of real paper into a real cup and falling into shadow if you block the virtual light-source. The Cambridge lab that developed the tool sees uses in remote working, collaboration or device prototyping. If you hadn't guessed, there's a hacked Kinect at the heart of HoloDesk's DNA, which makes us wonder how long it'll be before we can use it to play Halo.

  • OmniTouch projection interface makes the world your touchscreen (video)

    by Mat Smith, 10.18.2011

    Sometimes you just want to make notes on your forearm. Put that permanent marker down, though, because PhD student Chris Harrison and colleagues at Microsoft Research have created a new system that allows touchscreen interaction on hairy and uneven surfaces. It uses a short-range depth camera instead of the infrared sensor we've seen on similar devices, which allows it to gauge the viewing angle and other characteristics of the surfaces being used -- and it can even handle pinch-to-zoom. There's a video after the break, if you fancy a bit of wall-based digital finger painting.

  • Microsoft's PocketTouch prototype is like x-ray vision for your fingers (video)

    by Amar Toor, 10.18.2011

    Is it more gauche to pull out your phone in the middle of a date, or to draw a bunch of crop circles on your pants? That's the question we were asking ourselves after coming across PocketTouch -- a new Microsoft Research prototype that lets you manipulate your handset without ever removing it from your pocket. Developed by researchers Scott Saponas, Chris Harrison and Hrvoje Benko, the device essentially consists of a customized, multitouch capacitive sensor hooked onto the back of a smartphone. This sensor is capable of picking up gestures through fabric, allowing users to execute a wide array of eyes-free, gesture-based functions (including simple swipes and alphanumeric text) without ever having to actually whip out their phones. To do this, the team implemented what it calls an "orientation-defining unlock gesture," which helps the prototype get its bearings, before testing the capacitive sensors across different fabrics. According to Microsoft, the outcome "exceeded expectations," though there's no word on when or if this Goliath of a device could ever hit the mainstream. Head past the break to see a video of a man playing tic-tac-toe on his pants.

  • Microsoft Research celebrates 20 years of crazy innovation

    by Brian Heater, 09.28.2011

    Microsoft Research was founded way back in 1991 as a way of turning cutting-edge concepts into products. Over the years, the division has been behind some of the most exciting ideas that have come out of Redmond, from fluffy mice to HIV / AIDS research. The department is celebrating its 20th anniversary by highlighting some of its favorite projects over the next four weeks, so we're beating it to the punch with some of our picks. Check out our list below.

  • Microsoft Surface-controlled robots to boldly go where rescuers have gone before (video)

    by Joseph Volpe, 08.11.2011

    Ready to get hands-on in the danger zone -- from afar? That's precisely what an enterprising team of University of Massachusetts Lowell researchers is working to achieve with a little Redmond-supplied assistance. The Robotics Lab project, dubbed the Dynamically Resizing Ergonomic and Multi-touch (DREAM) Controller, makes use of Microsoft's Surface and Robotics Developer Studio to deploy and coordinate gesture-controlled search-and-rescue bots for potentially hazardous emergency response situations. Developed by Prof. Holly Yanco and Mark Micire, the tech's Natural User Interface maps a virtual joystick to a user's fingertips, delegating movement control to one hand and vision to the other -- much like an Xbox controller. The project's been under development for some time, having already aided rescue efforts during Hurricane Katrina, and, with future refinements, could sufficiently lower the element of risk for first responders. Head past the break for a video demonstration of this life-saving research.

  • Researchers demo 3D face scanning breakthroughs at SIGGRAPH, Kinect crowd squarely targeted

    by Darren Murph, 08.10.2011

    Lookin' to get your Grown Nerd on? Look no further. We just sat through 1.5 hours of high-brow technobabble here at SIGGRAPH 2011, where a gaggle of gurus with IQs far, far higher than ours explained in detail what the future of 3D face scanning would hold. Scientists from ETH Zürich, Texas A&M, Technion-Israel Institute of Technology and Carnegie Mellon University, as well as a variety of folks from Microsoft Research and Disney Research labs, were on hand, with each subset revealing a slightly different technique for solving an all-too-similar problem: painfully accurate 3D face tracking. Haoda Huang et al. revealed a highly technical new method that involved the combination of marker-based motion capture with 3D scanning in an effort to overcome drift, while Thabo Beeler et al. took a drastically different approach. Those folks relied on a markerless system that used a well-lit, multi-camera system to overcome occlusion, with anchor frames acting as staples in the success of its capture abilities. J. Rafael Tena et al. developed "a method that not only translates the motions of actors into a three-dimensional face model, but also subdivides it into facial regions that enable animators to intuitively create the poses they need." Naturally, this one's most useful for animators and designers, but the first system detailed is obviously gunning to work on lower-cost devices -- Microsoft's Kinect was specifically mentioned, and it doesn't take a seasoned imagination to see how in-home facial scanning could lead to far more interactive games and augmented reality sessions. The full shebang can be grokked by diving into the links below, but we'd advise you to set aside a few hours (and rest up beforehand).

  • Microsoft's designing women want to dress you up in wearable tech love (video)

    by Joseph Volpe, 08.09.2011

    Microsoft's no slouch when it comes to market expansion, with personal computing, mobile and even gaming under its Redmond wing -- but fashion? Well, it's time for pigs to fly because two of MS' very own took home Best Concept and Best in Show for their Printing Dress creation at the 15th Annual International Symposium on Wearable Computers. The dress, created by MS Research's Asta Roseway and the Xbox division's Sheridan Martin Small, incorporates a laptop, projector, four circuit boards and laser-cut, typewriter-shaped buttons into a black and white rice paper design. Wondering what all the gadgetry is for? Stressing the need for accountability in our age of anonymous, digital communication, the duo's winning entry aims to have us all wearing what we tweet -- literally, as messages typed via the bodice-sewn keys display on the gown's lower half. It might seem a far-fetched goal now, but these "eRenaissance women" hope to lure tech back from the "cold, unyielding" brink and into the warmth of a "human age." Jump past the break for a video peek at this ethical couture.

  • Microsoft's KinectFusion research project offers real-time 3D reconstruction, wild AR possibilities

    by Darren Murph, 08.09.2011

    It's a little shocking to think about the impact that Microsoft's Kinect camera has had on the gaming industry at large, let alone the 3D modeling industry. Here at SIGGRAPH 2011, we attended a KinectFusion research talk hosted by Microsoft, where a fascinating new look at real-time 3D reconstruction was detailed. To better appreciate what's happening here, we'd actually encourage you to hop back and have a gander at our hands-on with PrimeSense's raw motion sensing hardware from GDC 2010 -- for those who've forgotten, that very hardware was finally outed as the guts behind what consumers simply know as "Kinect." The breakthrough wasn't in how it allowed gamers to control common software titles sans a joystick -- the breakthrough was the price. The Kinect took 3D sensing to the mainstream, and moreover, allowed researchers to pick up a commodity product and go absolutely nuts. Turns out, that's precisely what a smattering of highly intelligent blokes in the UK have done, and they've built a new method for reconstructing 3D scenes (read: real-life) in real-time by using a simple Xbox 360 peripheral. The actual technobabble ran deep -- not shocking given the academic nature of the conference -- but the demos shown were nothing short of jaw-dropping. There's no question that this methodology could be used to spark the next generation of gaming interaction and augmented reality, taking a user's surroundings and making them a live part of the experience. Moreover, game design could be significantly impacted, with live scenes able to be acted out and stored in real-time rather than having to build something frame by frame within an application. According to the presenter, the tech that's been created here can "extract surface geometry in real-time," right down to the millimeter level.
    Of course, the Kinect's camera and abilities are relatively limited when it comes to resolution; you won't be building 1080p scenes with a $150 camera, but as CPUs and GPUs become more powerful, there's nothing stopping this from scaling with the future. Have a peek at the links below if you're interested in diving deeper -- don't be shocked if you can't find the exit, though.
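
    The published KinectFusion work describes the core of the method as volumetric depth-map fusion: each incoming Kinect depth frame is merged into a truncated signed distance function (TSDF) grid, and the surface falls out as the grid's zero crossing. Below is a minimal single-frame sketch of that fusion step in Python with NumPy; the camera intrinsics, volume bounds and truncation distance are illustrative stand-ins rather than Microsoft's values, and the GPU-based camera-pose tracking that makes the real system run in real time is omitted (the camera is assumed fixed at the world origin).

```python
import numpy as np

# Illustrative constants -- not the values used by Microsoft Research.
FX = FY = 525.0          # focal lengths in pixels (typical for a Kinect)
CX, CY = 319.5, 239.5    # principal point
TRUNC = 0.03             # truncation distance: 3 cm

def integrate_frame(tsdf, weights, depth, voxel_size, origin):
    """Fuse one depth frame (H x W, in metres) into the TSDF volume.

    tsdf, weights: (nx, ny, nz) arrays holding the running fusion state.
    origin: world position of voxel (0, 0, 0); the camera sits at the
    world origin looking down +z (a real system would apply the tracked
    camera pose here).
    """
    nx, ny, nz = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    pts = origin + voxel_size * np.stack([ii, jj, kk], axis=-1)
    x, y, z = pts[..., 0], pts[..., 1], pts[..., 2]

    # Project every voxel centre into the depth image (pinhole model).
    zs = np.where(z > 0, z, np.inf)   # avoid divide-by-zero behind the camera
    u = np.round(FX * x / zs + CX).astype(int)
    v = np.round(FY * y / zs + CY).astype(int)
    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0                    # ignore pixels with no depth reading

    # Signed distance along the viewing ray, truncated to +/- TRUNC;
    # voxels far behind the observed surface are left untouched.
    sdf = np.clip(d - z, -TRUNC, TRUNC)
    update = valid & (sdf > -TRUNC)

    # Weighted running average over frames -- the heart of the fusion.
    w_new = weights + update
    tsdf[update] = (tsdf * weights + sdf)[update] / w_new[update]
    weights[:] = w_new
    return tsdf, weights
```

    Feeding a stream of frames through `integrate_frame` averages away per-frame sensor noise, which is what lets a noisy $150 depth camera produce clean geometry; the real pipeline then ray-casts the TSDF both for display and for tracking the next frame's camera pose.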

  • Microsoft Research's .NET Gadgeteer steps out into the light, shoots daggers at Arduino (video)

    by Joseph Volpe, 08.03.2011

    Arduino, meet .NET Gadgeteer -- your newest homebrew hacking rival. Born from Microsoft Research engineers' desire to build prototypes quickly and easily, the two-and-a-half-years-in-the-making, ARM7-powered mainboard packs 4MB Flash, 16MB RAM, Ethernet, WiFi, an SD card interface and USB ports. Novice modders might wanna put the Netduino down because this homespun alternative also runs atop MS' .NET Micro Framework and, thanks to its IntelliSense feature, makes auto-completion a breeze for first-timers. If you're interested in what Ballmer & co. are offering, you can head to the project's site now to pre-order its GHI-made hardware: a $250 Spider Starter Kit and the $120 Spider Mainboard. Both will be available by the end of September, but if you need a preview of what this burgeoning open source community has to offer, peep the stop-motion making mod after the break.

  • Microsoft Research-backed e-reader prototype can't keep its text to itself (video)

    by Brian Heater, 05.14.2011

    We've seen plenty of dual-screen devices over the past couple of years, and they never fail to make us a little sentimental for Microsoft's stillborn Courier concept. That goes double for this reader device, which made an appearance at this week's CHI conference in Vancouver, seeing as how Microsoft Research apparently played a role in its development. But this gadget, presented by the University of Maryland's Nicholas Chen, is clearly its own beast -- and it's an awesome looking one at that. The reader actually only has one screen, but it can connect wirelessly with other units, letting the users do things like send links between devices. It will also clip magnetically to another unit, so you can look at two pages of the same document at the same time, just like one of those oldfangled book-type things. Fans of awkward intros, check out the video after the break. [Thanks, Winston]

  • Microsoft motion controller concept kicks sand in Kinect's puny face

    by Brian Heater, 05.11.2011

    Think your body's a temple? Turns out it's actually just the antenna the temple's staff uses to watch football when they're done praying. A group of engineers from Microsoft Research showcased a technology at Vancouver's Conference on Human Factors in Computing Systems that offers gesture-based control on a scale that could make the company's Kinect controller downright laughable. The team demonstrated how it could harness the human body's reception of electromagnetic noise to create gesture-based computer interaction that does away with the need for a camera -- though a receiver is worn on the body (the neck, in this case). The system uses the unique signals given off in different parts of the home to help measure the interaction, effectively turning one's walls into giant control pads, which can regulate things like lighting and the thermostat. Hopefully games, too, because we can't wait to play Pac-Man with our bedrooms.

  • Microsoft's Rock and Rails touchscreen lets you massage your photos with both hands

    by Amar Toor, 05.11.2011

    If you ever get tired of poking away at your smartphone's screen like a doorbell, you're not alone. The forward-looking folks over at Microsoft Research have been working away at a new touchscreen system designed to pick up on more natural, whole-hand movements, effectively allowing users to break free from the finger-based paradigm that governs most tactile devices. Developed in coordination with engineers at Microsoft Surface, the company's Rock and Rails interface can detect three basic hand gestures: a balled fist, which holds items on the screen, an extended hand that can align objects (see the cell marked "d," on the right) and a curved paw, around which users can pivot images (see cell b). This taxonomy opens up new ways for users to crop, re-size or generally play around with their UI elements, though it remains unclear whether the display will trickle down to the consumer level anytime soon. For now, it appears to operate exclusively on the Surface, but more details should surface when the system's developers release a paper on their project later this year. Hit the source links to see a video of the thing in action.

  • Microsoft's Bill Buxton exhibits gadget collection 35 years in the making

    by Donald Melanson, 05.09.2011

    You don't get to be Microsoft's Principal Researcher without a strong sense of technology and design history, and Bill Buxton certainly has plenty of evidence to show he's well qualified in that respect. That swath of devices pictured above is just a sample of the impressive gadget collection Buxton has amassed over the past 35 years, which he is now exhibiting in public for the first time at a conference in Vancouver, British Columbia this week. Not able to check it out in person? Then you can thankfully settle for the next best thing, as Microsoft Research has also put the entire collection online, complete with Buxton's own notes for each of the items (which range from Etch-a-Sketches to watches to a range of different input devices). Hit up the source link below to start browsing.

  • TouchStudio from Microsoft Research tests users' willingness to code solely on their phone

    by Zachary Lutz, 04.13.2011

    While touchscreens bring imagery and ideas to unprecedented personal levels, unsurprisingly, they have remained entirely inadequate for building the programs that enable this humane experience. Now, a project from Microsoft Research aims to shatter this axiom with the TouchStudio development environment for Windows Phone. Enterprising coders may get their hands on the initial release of this paradigm buster in the Marketplace, where they're invited to try their hand at coding applications using only fingers on glass. The SDK includes a handful of sample scripts to get you going, along with the proper hooks to access many of the phone's built-in sensors. While this dev kit won't let you build the next killer app by simply dragging and poking haphazardly, if you happen to prove us wrong, we really want to hear about it. [Thanks, Fred T.]

  • Switched On: Pen again

    by Ross Rubin, 04.10.2011

    Each week Ross Rubin contributes Switched On, a column about consumer technology. Last week's Switched On discussed how some next wave notions from a decade ago were trying to reinvent themselves. Here's one more. Surging smartphone vendor HTC is seeking to bring back an input method that many wrote off long ago with its forthcoming Flyer tablet and EVO View 4G comrade-in-arms: the stylus. A fixture of early Palm and Psion PDAs, Pocket PCs and Windows Mobile handsets, slim, compact styli were once the most popular thing to slip down a well since Timmy. Then, users would poke the cheap, simple sticks at similarly inexpensive resistive touchscreens. After the debut of tablet PCs, though, more companies started to use active digitizer systems like the one inside the Flyer. Active pens offer more precision, which can help with tasks such as handwriting recognition, and support "hovering" above a screen, the functional equivalent of a mouseover. On the other hand, they are also thicker, more expensive, and need to be charged. (Update: as some have pointed out in comments, Wacom's tablets generate tiny electromagnetic fields that power active digitization, and don't require the pen to store electricity itself.) And, of course, just like passive styli, active pens take up space and can be misplaced. The 2004 debut of the Nintendo DS -- the ancestor of the just-released 3DS -- marked the beginning of what has become the last mass-market consumer electronics product series to integrate stylus input. The rising popularity of capacitive touchscreens and multitouch has replaced styli with fingers as the main user interface elements. Instead of using a precise point for tasks such as placing an insertion point in text, we now expand the text dynamically to accommodate our oily instruments. On-screen buttons have also grown, as have the screens themselves, all in the name of losing a contrivance.

  • Microsoft's SpecNet promises to seek out unused wireless spectrum

    by Donald Melanson, 03.28.2011

    Microsoft's been toying around with hardware for so-called white space spectrum for some time now, and it's now back with another fairly ambitious scheme. Dubbed "SpecNet," the hardware in this case is actually a network of spectrum analyzers that would seek out and map where spectrum is available and where it's not, and let unlicensed devices use it when it's available. Of course, that's still all a bit theoretical, and it does face a few significant hurdles. Those spectrum analyzers, for instance, would cost between $10,000 and $40,000 apiece, and you'd obviously need a lot of them for a nationwide network, although Microsoft suggests that they could be set up on an ad hoc basis and assigned to different areas for a specific time period. Those interested in the finer technical details can dive into Microsoft's full paper on the subject at the source link below.

  • Microsoft researchers show off intuitive stylus, don't know how to hold a pencil (video)

    by Christopher Trout, 03.10.2011

    At this week's Microsoft promotional bonanza, otherwise known as TechFest 2011, a team of researchers debuted a rather shabby-looking capacitive stylus that switches between functions based on your grip -- an interesting addition to a rather stagnant market, sure, but there are still a few kinks to be worked out. The multi-purpose tool enlists capacitive multi-touch and orientation sensors to respond to how you hold the thing, allowing you to perform a number of different tasks with a simple repositioning. A demo video of the stylus at work shows a disembodied hand switching between a pen, an airbrush, a compass, and even a virtual flute with ease, but while the project stresses the "naturalness" of the experience, we're pretty sure nobody sketches quite like that. Check out the video after the break to see what we mean.

  • Microsoft Research shows off next-generation gesture interfaces, Kinect integration, other neato stuff (video)

    by Tim Stevens, 02.25.2011

    Leave it to Microsoft Research to show off some cool stuff that may or may not ever make it into anything you can actually buy. Check out the video after the break to see the latest: Director of Microsoft Applied Sciences Steven "Stevie B" Bathiche showing off a variety of interesting interfaces. It all starts with gesture controls that take you well beyond the touchscreen, relying on a retro-reflective sash and a camera to detect hand position. But things quickly progress to a flat lens called a wedge that can enable holographic-like imagery. Pair that with a Kinect and perspective shifts come into play, tracking your face to enable you to peer around, like looking out a window. It's all just waiting for you below -- and maybe IRL sometime in the future.

  • Microsoft Research teases Windows Phones controlling Surfaces and crazy desktop UIs

    by Chris Ziegler, 02.25.2011

    Hey, look, at this point, we just want ourselves some good, old-fashioned copy and paste -- but we'll give Microsoft some credit for looking a year (or two, or ten) beyond that watermark at what could be coming down the pike for human-machine interaction -- and specifically, how phones could play a role. In a presentation and promotional video pulled together this week, Microsoft Research boss Craig Mundie shows how you could tilt your smartphone to control a bubbly, colorful look into your personal life on your desktop machine and how you could snap a photo and then drop the handset onto a Surface for instant transfer (perhaps a bit like HP's Touch to Share), among other gems. Of course, this is all pure research at this point -- it's anyone's guess whether these concepts could make the jump to production, and if so, when -- but it's fun to watch. Follow the break for video. [Thanks, Jake]

  • Bing 2.0 brings better Facebook integration and the impressive Streetside to iPhone (video)

    by Thomas Ricker, 12.16.2010

    Microsoft just released -- or should we say, Apple just approved -- version 2.0 of the Bing search app for iOS devices. In addition to several other new features including integrated Facebook Likes on search results (really!?) and in-app check-ins to Facebook and Foursquare, Bing now comes packing Streetside, something that first blew us away as Street Slide when it was still in the labs at Microsoft Research. Unlike Google's Street View, which requires a lot of forward- and back-clicking and turning in order to get a feel for a location, Streetside provides a more comprehensive view of the shops and businesses in an area by letting you strafe down the sidewalk while zooming in and out of the buildings located on each side of the street. We took it for a brief spin (literally) and came away impressed. You won't find Streetside implemented for all locations yet (for example, San Francisco's Make-out Room was found on Streetside but the Slanted Door restaurant wasn't) but they do seem to have large swaths of major cities covered based on our brief testing of Chicago, Seattle, New York, and San Francisco. Sorry, nothing yet in London and Amsterdam but maybe you'll have better success searching your own neighborhoods. See the full list of what's new after the break in addition to a Streetside demo from Bing's architect Blaise Aguera y Arcas -- unfortunately, we're not seeing the impressive Panorama feature he mentions in this release. Update: We've been told that Facebook Likes, like Panorama, like totally didn't make it into the app release. It's a web search results feature only for the time being.