Context-aware Computing

Latest

  • Nokia toys with context-aware smartphone settings switch, Jigsaw provides better context for apps like this

    by Sean Hollister
    11.27.2010

    If Intel prognosticated correctly, context is the future of apps -- your device's array of sensors will determine where you are and what you're doing, and clever programs will guess from there. Problems arise, however, when you try to run those accelerometers, microphones, radios and GPS receivers constantly on an average smartphone's battery and then make sense of the raw data, and that's where a group of Dartmouth researchers (and one Nokia scholar) are trying to stake their claim. They've got a bundle of algorithms called Jigsaw for iPhone and Symbian that claims to continually report what you're up to (whether walking, running, cycling or driving) no matter where you place your device, and only pings the sensors as needed based on how active you are. (For better or for worse, Jigsaw also dodges the privacy concerns Intel's cloud-based API might raise by storing all personal data on the phone.) Of course, we've had a very basic version of context-aware functionality for years in apps like Locale for Android and GPS-Action for Symbian, which modify your smartphone settings under very specific conditions you specify. Now Espoo's doing much the same with an app called Nokia Situations. Presently in the experimental stage, Situations is a long way from the potential of frameworks like Jigsaw, but here you won't have to wait -- you can download a beta for Symbian^3, S60 5th Edition and S60 3.2 at our source links without further delay.
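
    Jigsaw's actual pipeline isn't published in the post itself, so purely as an illustration of the duty-cycling idea -- sample in short bursts, classify, and back off when nothing's happening -- here's a minimal Python sketch. The read_accelerometer() helper, the variance thresholds and the sleep intervals are all our own stand-ins, not anything from the Dartmouth code:

      import random
      import time

      def read_accelerometer():
          # Stand-in for a platform sensor API; returns one (x, y, z) sample in g.
          return (random.gauss(0, 0.3), random.gauss(0, 0.3), random.gauss(1, 0.3))

      def sample_burst(n=64, rate_hz=32):
          # Wake the sensor, grab a short burst at rate_hz, then let it sleep again.
          samples = []
          for _ in range(n):
              samples.append(read_accelerometer())
              time.sleep(1.0 / rate_hz)
          return samples

      def classify(samples):
          # Crude proxy for an activity classifier: the variance of the
          # acceleration magnitude separates still, walking and running.
          mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in samples]
          mean = sum(mags) / len(mags)
          var = sum((m - mean) ** 2 for m in mags) / len(mags)
          if var < 0.05:
              return "stationary"
          if var < 0.5:
              return "walking"
          return "running"

      # The adaptive part: idle users get long gaps between sensor bursts,
      # active users get sampled far more often.
      SLEEP_BY_ACTIVITY = {"stationary": 60.0, "walking": 10.0, "running": 2.0}

      while True:
          activity = classify(sample_burst())
          print("current activity:", activity)
          time.sleep(SLEEP_BY_ACTIVITY[activity])

    The point of a scheme like this is that the expensive sensors (GPS especially) only spin up when the cheap ones say something interesting is going on, which is how you keep "always-on" context from flattening a phone battery by lunchtime.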

  • Ubuntu prototype uses face recognition to intelligently move UI elements (video)

    by Darren Murph
    09.20.2010

    Not that we haven't seen mock-ups before for systems using webcams to intelligently move user interface elements, but it's another thing entirely for a company to publicly proclaim that it's tinkering with building something of the sort into a future version of its OS. Over at the Canonical design blog, one Christian Giordano has revealed that the company is in the early stages of creating new ways to interact with Ubuntu, primarily by using proximity and orientation sensing to have your PC react based on how you're sitting, where you're sitting and where your head and eyes are pointed. For instance -- once a user fires up a video and leans back, said video would automatically go into fullscreen mode. Similarly, if a user walked away to grab some coffee and a notification appeared, that notification would be displayed fullscreen so that he or she could read it from afar. There's no mention just yet of when the company plans to actually bring these ideas to end-users, but the video embedded after the break makes us long for "sooner" rather than "later."
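
    Canonical hasn't shared code, but the "lean back / walk away" behavior is easy to prototype: detect a face with the webcam and use its apparent size as a stand-in for distance. A minimal Python sketch, assuming OpenCV's stock Haar face detector (our choice, not necessarily Giordano's) and a purely hypothetical fullscreen hook:

      import cv2

      cam = cv2.VideoCapture(0)
      face_cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      def viewer_state(frame):
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          faces = face_cascade.detectMultiScale(gray, 1.1, 5)
          if len(faces) == 0:
              return "away"  # nobody in front of the screen
          widest = max(faces, key=lambda f: f[2])
          # A small face bounding box means the viewer is far from the webcam.
          return "leaning_back" if widest[2] < frame.shape[1] // 8 else "up_close"

      while True:
          ok, frame = cam.read()
          if not ok:
              break
          state = viewer_state(frame)
          if state in ("leaning_back", "away"):
              print("UI hook: go fullscreen / enlarge notifications")  # hypothetical
          else:
              print("UI hook: normal windowed layout")

    The real prototype presumably smooths these readings over time and talks to the window manager instead of printing; the one-eighth-of-frame-width threshold is just a guess to make the sketch concrete.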