medialab

Latest

  • MIT's CityHome turns tight spaces into futuristic abodes

    by 
    Chris Velazco
    05.28.2014

    Pulp sci-fi novels have painted a picture of a bleak future, with dense, dystopian urban sprawl forcing us into ever-shrinking living spaces. Such ignominious abodes would probably benefit from something MIT Media Lab's Changing Places team has been working on. It's called CityHome, and it's a concept that could turn even the most modest studio apartment into a space befitting the stylish futurist lurking in us all.

  • Augmented reality concept uses Google Glass to make reading the newspaper more like... reading a website

    by 
    Mat Smith
    03.25.2014

    As part of the Wearable Tech Expo 2014 in Tokyo, the Asahi Shimbun is looking to offer richer content to users still reading its dead tree editions. The 'AIR' concept uses wearable du jour Google Glass to both detect physical markers and display any digital companion content. According to Asahi's Media Lab, the concept's aim is to better broadcast and convey "emotional" content: a picture of Winter Olympics skater Asada Mao gets picked up and Google Glass barrels into a slideshow, alongside a stirring soundtrack. (She had announced her retirement, and apparently her many fans were very upset.)

  • MIT Media Lab's Tensor Displays stack LCDs for low-cost glasses-free 3D (hands-on video)

    by 
    Zach Honig
    08.09.2012

    Glasses-free 3D may be the next logical step in TV's evolution, but we have yet to see a convincing device make it to market that doesn't come along with a five-figure price tag. The sets that do come within range of tickling our home theater budgets won't blow you away, and it's not unreasonable to expect that trend to continue through the next few product cycles. A dramatic adjustment in our approach to glasses-free 3D may be just what the industry needs, so you'll want to pay close attention to the MIT Media Lab's latest brew. Tensor Displays combine layered low-cost panels with some clever software that assigns and alternates the image at a rapid pace, creating depth that actually looks fairly realistic. Gordon Wetzstein, one of the project creators, explained that the solution essentially "(takes) the complexity away from the optics and (puts) it in the computation," and since software solutions are far more easily scaled than their hardware equivalents, the Tensor Display concept could result in less expensive, yet superior 3D products. We caught up with the project at SIGGRAPH, where the first demonstration included four fixed images, which employed a similar concept to the LCD version, but with backlit inkjet prints instead of motion-capable panels. Each displaying a slightly different static image, the transparencies were stacked to give the appearance of depth without the typical cost. The version that shows the most potential, however, consists of three stacked LCD panels, each displaying a slightly different pattern that flashes back and forth four times per frame of video, creating a three-dimensional effect that appears smooth and natural. The result was certainly more tolerable than the glasses-free 3D we're used to seeing, though it's surely a long way from being a viable replacement for active-glasses sets -- Wetzstein said that the solution could make its way to consumers within the next five years. Currently, the technology works best in a dark room, where it's able to present a consistent image. Unfortunately, this meant the light levels around the booth were a bit dimmer than what our camera required, resulting in the underexposed, yet very informative hands-on video you'll see after the break.
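
    For the curious, the trick is usually described as factorizing the target light field into nonnegative layer patterns that get time-multiplexed faster than the eye can follow. Below is a minimal sketch of that idea using NumPy and a toy two-layer, 2D light field (the Lab's three-panel rig factorizes a full tensor); every name, size and iteration count is illustrative, not the researchers' code.

        # Toy sketch: approximate a target light field L as a sum of R rank-1
        # terms, each produced by two stacked transmissive layers; the R terms
        # are flashed in rapid succession so the eye averages them together.
        import numpy as np

        rng = np.random.default_rng(0)
        L = rng.random((64, 64))   # target light field (view angle x pixel), toy data
        R = 4                      # time-multiplexed subframes per video frame

        # Layer patterns, kept nonnegative (they represent LCD transmittances).
        F = rng.random((64, R))
        G = rng.random((64, R))

        for _ in range(200):       # multiplicative updates (Lee & Seung-style NMF)
            F *= (L @ G) / (F @ (G.T @ G) + 1e-9)
            G *= (L.T @ F) / (G @ (F.T @ F) + 1e-9)

        approx = F @ G.T           # what the viewer perceives after time averaging
        print("reconstruction error:", np.linalg.norm(L - approx) / np.linalg.norm(L))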

  • MIT projection system extends video to peripheral vision, samples footage in real-time

    by 
    Alexis Santos
    06.25.2012

    Researchers at the MIT Media Lab have developed an ambient lighting system for video that would make Philips' Ambilight tech jealous. Dubbed Infinity-by-Nine, the rig analyzes frames of footage in real-time -- with consumer-grade hardware, no less -- and projects rough representations of the video's edges onto a room's walls or ceiling. Synchronized with camera motion, the effect aims to extend the picture into a viewer's peripheral vision. MIT guinea pigs have reported a greater feeling of involvement with video content when Infinity-by-Nine was in action, and some even claimed to feel the heat from on-screen explosions. A five-screen multimedia powerhouse it isn't, but the team suggests that the technology could be used for gaming, security systems, user interface design and other applications. Head past the jump to catch the setup in action.
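
    To make the pipeline concrete, here's a rough sketch, assuming OpenCV, of per-frame processing in the spirit described above: sample the outer bands of each frame, blur them down to peripheral-vision coarseness and hand them off for projection. The file name, band width and window routing are stand-ins, not the Lab's implementation.

        # Rough sketch: peripheral vision only needs coarse color and motion,
        # so the edge bands are resized and blurred aggressively.
        import cv2

        BAND = 40  # pixels sampled from each edge of the frame (illustrative)

        cap = cv2.VideoCapture("movie.mp4")  # hypothetical input file
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            left  = frame[:, :BAND]
            right = frame[:, -BAND:]
            left  = cv2.GaussianBlur(cv2.resize(left,  (320, 480)), (51, 51), 0)
            right = cv2.GaussianBlur(cv2.resize(right, (320, 480)), (51, 51), 0)
            cv2.imshow("left wall", left)    # in practice: routed to projectors
            cv2.imshow("right wall", right)
            cv2.imshow("screen", frame)
            if cv2.waitKey(1) == 27:         # Esc quits
                break
        cap.release()
        cv2.destroyAllWindows()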

  • EyeRing finger-mounted connected cam captures signs and dollar bills, identifies them with OCR (hands-on)

    by 
    Zach Honig
    04.25.2012

    Ready to swap that diamond for a finger-mounted camera with a built-in trigger and Bluetooth connectivity? If it could help identify otherwise indistinguishable objects, you might just consider it. The MIT Media Lab's EyeRing project was designed with an assistive focus in mind, helping visually disabled persons read signs or identify currency, for example, while also serving to assist children during the tedious process of learning to read. Instead of hunting for a grownup to translate text into speech, a young student could direct EyeRing at words on a page, hit the shutter release, and receive a verbal response from a Bluetooth-connected device, such as a smartphone or tablet. EyeRing could be useful for other individuals as well, serving as an ever-ready imaging device that enables you to capture pictures or documents with ease, transmitting them automatically to a smartphone, then on to a media sharing site or a server. We peeked at EyeRing during our visit to the MIT Media Lab this week, and while the device is buggy at best in its current state, we can definitely see how it could fit into the lives of people unable to read posted signs, text on a page or the monetary value of a currency note. We had an opportunity to see several iterations of the device, which has come quite a long way in recent months, as you'll notice in the gallery below. The demo, which, like many at the Lab, includes a Samsung Epic 4G, transmits images from the ring to the smartphone, where text is highlighted and read aloud using a custom app. When snapping the text "ring," the rig took a dozen or so attempts before it correctly read the word aloud, but considering that we've seen much more accurate OCR implementations, it's reasonable to expect a more advanced version of the software to make its way out once the hardware is a bit more polished -- at this stage, EyeRing is more about the device itself, which had some issues of its own maintaining a link to the phone. You can get a feel for how the whole package works in the video after the break, which required quite a few takes before we were able to capture an accurate reading.
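
    As an illustration of the capture-OCR-speech loop described above, here's a minimal sketch using stand-in libraries (pytesseract for OCR, pyttsx3 for text-to-speech); the actual EyeRing app and its Bluetooth link aren't public, so this shows the flow rather than the Lab's code.

        # Minimal capture -> OCR -> speech pipeline; the image path stands in
        # for a frame received from the ring camera over Bluetooth.
        from PIL import Image
        import pytesseract
        import pyttsx3

        def read_aloud(image_path: str) -> str:
            """OCR a snapshot from the ring camera and speak the result."""
            text = pytesseract.image_to_string(Image.open(image_path)).strip()
            if text:
                engine = pyttsx3.init()
                engine.say(text)
                engine.runAndWait()
            return text

        print(read_aloud("ring_snapshot.jpg"))  # hypothetical captured frame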

  • Perifoveal Display tracks head positioning, highlights changing data on secondary LCDs (hands-on)

    by 
    Zach Honig
    04.25.2012

    If there's a large display as part of your workstation, you know how difficult it can be to keep track of all of your windows simultaneously without missing a single update. Now imagine surrounding yourself with three, four or five jumbo LCDs, each littered with dozens of windows tracking real-time data -- be it RSS feeds, an inbox or chat. Financial analysts, security guards and transit dispatchers are but a few of the professionals tasked with monitoring such arrays, constantly scanning each monitor to keep abreast of updates. One project from the MIT Media Lab offers a solution, pairing Microsoft Kinect cameras with detection software, then highlighting changes with a new graphical user interface. Perifoveal Display presents data at normal brightness on the monitor that you're facing directly. Then, as you move your head to a different LCD, that panel becomes brighter, while changes on any of the displays that you're not facing directly (but that still remain within your peripheral vision) -- a rising stock price, or motion on a security camera -- are highlighted with a white square, which slowly fades once you turn to face the new information. During our hands-on demo, everything worked as described, albeit without the instant response times you might expect from such a platform. As with most Media Lab projects, there's no release date in sight, but you can gawk at the prototype in our video just after the break.
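
    Here's a toy sketch of the highlight-and-fade behavior we saw, with head pose reduced to a "which monitor am I facing" index and simple frame differencing standing in for the Lab's Kinect-driven detection software; the threshold and decay rate are invented for illustration.

        # Per-monitor change detection plus a highlight that fades once the
        # user turns to face the monitor that changed.
        import numpy as np

        FADE = 0.92  # per-tick decay of a highlight once the user looks over

        class Monitor:
            def __init__(self):
                self.prev = None
                self.highlight = 0.0  # 0 = none, 1 = full white square

            def update(self, frame: np.ndarray, facing: bool):
                if self.prev is not None:
                    changed = np.abs(frame.astype(int) - self.prev.astype(int)).mean() > 5
                    if changed and not facing:
                        self.highlight = 1.0      # flag a peripheral change
                if facing:
                    self.highlight *= FADE        # fade once attended to
                self.prev = frame.copy()

        monitors = [Monitor() for _ in range(3)]
        facing_index = 1                          # would come from Kinect head tracking
        for i, m in enumerate(monitors):
            fake_frame = np.random.randint(0, 255, (480, 640), np.uint8)
            m.update(fake_frame, facing=(i == facing_index))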

  • DIY Cellphone has the footprint of an ice cream sandwich, definitely doesn't run ICS (hands-on)

    by 
    Zach Honig
    04.25.2012

    Building your own wireless communications device isn't for the faint of heart, or the law-abiding -- the FCC tends to prefer placing its own stamp of approval on devices that utilize US airwaves, making a homegrown mobile phone an unlikely proposition. That didn't stop a team at the MIT Media Lab from creating such a DIY kit, however. Meet the Do-It-Yourself Cellphone. This wood-based mobile rig, while it's currently in the prototype phase (where it may indefinitely remain), would eventually ship with a circuit board, control pad, a fairly beefy antenna and a monochrome LCD. Sounds like it'd be right at home in some kid's garage workshop in the early '80s, not showcased at an MIT open house. The argument here is that people spend more time with their phone than with any other device, so naturally they'd want to build one to their liking. Nowadays, folks expect their pocketable handset to let them not only place and receive phone calls, but also store phone numbers, pack a rechargeable battery and, in some cases, even send and receive email and surf the web -- none of which are available with such a kit. The prototype we saw was fully functional. It could place calls. It could receive calls. There was even Caller ID! The phone does indeed feel homemade, with its laser-cut plywood case and a design that lacks some of the most basic gadget essentials, like a rechargeable battery (or at the very least some provision for replacing the 9-volt inside without unscrewing the case). Audio quality sounded fine, and calls went out and came in without a hitch -- there's a SIM card slot inside, letting you bring the nondescript phone to the carrier of your choice. Does it work? Yes. Is it worth dropping $100-150 in parts to build a jumbo-sized phone with a microscopic feature set? No, there's definitely nothing smart about the DIY Cellphone. If you want to throw together your own handset, however, and not risk anyone questioning the legitimacy of your homemade claim, you might want to keep an eye out for this to come to market. The rest of you will find everything you need in the video just past the break. We're just happy to have walked away without any splinters.
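
    The team hasn't published the kit's firmware, but hobbyist GSM modules of this era are typically driven by standard AT commands over a serial line, so placing and answering calls might look something like the following sketch (port name and module behavior are assumptions, using pyserial).

        # Hypothetical sketch of driving a hobbyist GSM module with AT commands.
        import serial

        gsm = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)  # port name assumed

        def dial(number: str):
            gsm.write(f"ATD{number};\r".encode())   # trailing semicolon = voice call

        def answer():
            gsm.write(b"ATA\r")                     # answer an incoming call

        def hang_up():
            gsm.write(b"ATH\r")                     # hang up

        dial("5551234567")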

  • OLED Display Blocks pack six 128 x 128 panels, we go hands-on at MIT (video)

    by 
    Zach Honig
    04.24.2012

    How do you develop an OLED display that gives a 360-degree perspective? Toss six 1.25-inch panels into a plastic cube, then turn it as you see fit. That's an overly simplistic explanation for the six-sided display on hand at the MIT Media Lab today, which is quite limited in its current form, but could eventually serve an enormous variety of applications. Fluid Interfaces Group Research Assistant Pol Pla i Conesa presented several such scenarios for his Display Blocks, which consist of 128 x 128-pixel OLED panels. Take, for example, the 2004 film Crash, which tells interweaving stories that could be presented simultaneously with such a display -- simply rotate the cube until you land on a narrative you'd like to follow, and the soundtrack will adjust to match. It could also go a long way when it comes to visualizing data, especially in groups -- profiles of MIT applicants, for example, or segments of a business that need to be organized along different parameters, could each be assigned to a cube, tossed into an accepted or rejected pile and repositioned as necessary. Imagine having a group of display cubes when it comes time to plan the seating chart for a reception -- each cube could represent one individual, with a color-coded background, a name or photo up top and different descriptive elements on each side. The same could apply to products at monstrous companies like Samsung or Sony, where executives need to make planning decisions based on product performance, and could benefit greatly from having all of the necessary information for a single gadget listed around each cube. On a larger scale, the cubes could be used to replace walls and floors in a building -- want to change the color of your wallpaper? Just push a new image to the display, and dedicate a portion of the wall for watching television, or displaying artwork. You could accomplish this with networked single-sided panels as well, but that wouldn't be nearly as much fun. The Media Lab had a working prototype on display today, which demonstrated the size and basic functionality, but didn't have an adjustable picture. Still, it's easy to imagine the potential of such a device, if, of course, it ever becomes a reality. As always, you'll find our hands-on demo just past the break.
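
    One small piece of that interaction is deciding which face of the cube is up, so the content and soundtrack can follow the face the user chose. A hypothetical sketch -- assuming an onboard accelerometer, which the demo doesn't confirm -- might snap the gravity vector to its dominant axis like so.

        # Map a (hypothetical) accelerometer reading to the cube face that's up.
        FACES = {(0, 0, +1): "story A", (0, 0, -1): "story B",
                 (0, +1, 0): "story C", (0, -1, 0): "story D",
                 (+1, 0, 0): "story E", (-1, 0, 0): "story F"}

        def up_face(ax: float, ay: float, az: float) -> str:
            """Snap the gravity vector to its dominant axis and look up that face."""
            axis = max(range(3), key=lambda i: abs((ax, ay, az)[i]))
            key = [0, 0, 0]
            key[axis] = 1 if (ax, ay, az)[axis] > 0 else -1
            return FACES[tuple(key)]

        print(up_face(0.1, -0.2, 9.8))  # cube resting flat -> "story A"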

  • Droplet and StackAR bring physical interface to virtual experiences, communicate through light (hands-on)

    by 
    Zach Honig
    04.24.2012

    Light-based communication seems to wind throughout the MIT Media Lab -- it is a universal language, after all, since many devices output light, be it with a dedicated LED or a standard LCD, and have the capacity to view and interpret it. One such device, dubbed Droplet, essentially redirects light from one source to another, while also serving as a physical interface for tablet-based tasks. Rob Hemsley, a research assistant at the Media Lab, was on hand to demonstrate two of his projects. Droplet is a compact self-contained module with an integrated RGB LED, a photodiode and a CR1216 lithium coin battery -- which provides roughly one day of power in the gadget's current early prototype status. Today's demo used a computer-connected HDTV and a capacitive-touch-enabled tablet. Using the TV to pull up a custom Google Calendar module, Hemsley held the Droplet up to a defined area on the display, which then output a series of colors, transmitting data to the module. That data was then pushed to a tablet after placing the Droplet on its display, pulling up the same calendar appointment and providing a physical interface for adjusting the date and time; changes are retained both in the cloud and on the module itself, which also outputs a pulsing light as it counts down to the appointment time. StackAR, the second project, functions in much the same way, but instead of outputting a countdown indicator, it displays schematics for a LilyPad Arduino when placed on the tablet, identifying connectors based on a pre-selected program. The capacitive display can recognize orientation, letting you drop the controller anywhere on the surface, then outputting a map to match. Like the Droplet, StackAR can also recognize light input, even letting you program the Arduino directly from the tablet by outputting light, effectively simplifying the interface creation process even further. You can also add software control to the board, which will work in conjunction with the hardware, bringing universal control interfaces to the otherwise space-limited Arduino. Both projects appear to have incredible potential, but they're clearly not ready for production just yet. For now, you can get a better feel for Droplet and StackAR in our hands-on video just past the break.
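
    The exact encoding Hemsley uses isn't published, but the screen-to-photodiode idea can be illustrated with a toy scheme that packs each byte into one RGB color to be flashed on the on-screen patch; the 3-3-2 bit allocation below is an arbitrary choice for the sketch, not the Lab's format.

        # Toy scheme: one byte per flashed color, 3 bits red, 3 green, 2 blue.
        def encode(data: bytes) -> list[tuple[int, int, int]]:
            colors = []
            for byte in data:
                r = (byte >> 5) & 0b111
                g = (byte >> 2) & 0b111
                b = byte & 0b11
                colors.append((r * 36, g * 36, b * 85))  # spread over 0-255
            return colors

        def decode(colors: list[tuple[int, int, int]]) -> bytes:
            return bytes(((r // 36) << 5) | ((g // 36) << 2) | (b // 85)
                         for r, g, b in colors)

        payload = b"cal:2012-04-24T15:00"          # toy calendar handoff
        assert decode(encode(payload)) == payload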

  • MIT gets musical with Arduino-powered DrumTop, uses household objects as a source of sound

    by 
    Zach Honig
    04.24.2012

    Everyone's favorite microcontroller has been a boon for hobbyists and advanced amateurs, but it's also found a home among the brilliant projects at MIT's Media Lab, including a groovy instrument called DrumTop. This modern take on the drum pad delivers Arduino-powered interactivity in its simplest form -- hands-on time with ordinary household objects. Simply place a cup, a plastic ball or even a business card on the DrumTop to make your own original music. The prototype on display today includes eight pads, which are effectively repurposed speakers that tap objects placed on top, with a force-sensitive resistor (FSR) recognizing physical pressure and turning it into a synchronized beat. There's also a dial in the center that allows you to speed up or slow down the taps, presenting an adjustable tempo. DrumTop is more education tool than DJ beat machine, serving to teach youngsters about the physical properties of household objects, be it a coffee mug, a CD jewel case or a camera battery. But frankly, it's a lot of fun for folks of every age. There's no word on when you might be able to take one home, so for now you'll need to join us on our MIT visit for a closer look. We make music with all of these objects and more in the video after the break.
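
    As a sketch of the control loop the demo implies -- eight pads, each tapped in turn when its sensor reports an object, at a dial-set tempo -- consider the following, with all hardware reads and writes stubbed out; it's an illustration, not the DrumTop firmware.

        # Illustrative control loop: poll each pad's force sensor, tap when an
        # object is present, and spread one beat across the eight pads.
        import time

        NUM_PADS = 8
        FSR_THRESHOLD = 100   # raw sensor value suggesting an object is on top

        def read_fsr(pad: int) -> int:
            return 0          # stub: would read the pad's force-sensitive resistor

        def tap(pad: int):
            print(f"tap pad {pad}")  # stub: would pulse the repurposed speaker coil

        def read_tempo_dial() -> float:
            return 120.0      # stub: beats per minute from the center dial

        while True:
            delay = 60.0 / read_tempo_dial() / NUM_PADS
            for pad in range(NUM_PADS):
                if read_fsr(pad) > FSR_THRESHOLD:
                    tap(pad)
                time.sleep(delay)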

  • Newsflash uses high-frequency light to transmit data from iPad to smartphone, we go hands-on (video)

    by 
    Zach Honig
    04.24.2012

    MIT's Media Lab is chock-full of cutting-edge tech projects that researchers create, then often license to manufacturers and developers. One such project, Newsflash, uses high-frequency red and green light to transmit data to the built-in camera on a receiving device -- in this case Samsung's Epic 4G. The concept is certainly familiar, and functions in much the same way as a QR code, generating flashing light that's invisible to the human eye instead of a cumbersome 2D square. In the Media Lab's implementation, an iPad is used to display a static news page with flashing colored bands at the top, representing just a few vertical pixels on the LCD. As the device presents the standard touch experience you're already familiar with, it also broadcasts data that can be read by any camera, but flashes too quickly to be distracting or even noticeable to the naked eye. A Newsflash app then interprets those flashes and displays a webpage as instructed -- either a mobile version with the same content, or a translation of a foreign website. As with most Media Lab projects, Newsflash is simply a concept at this point, but it could one day make its way to your devices. Jump past the break to see it in action.
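
    The payload format isn't public, but the red/green scheme can be illustrated with a toy encoder that maps one bit to each flash -- red for 1, green for 0 -- fast enough for a camera to sample yet too fast for the eye; everything below is an assumption for illustration's sake.

        # Toy encoder/decoder: a URL becomes a sequence of red/green flashes.
        RED, GREEN = (255, 0, 0), (0, 255, 0)

        def to_bits(url: str) -> list[int]:
            return [(byte >> i) & 1 for byte in url.encode() for i in range(7, -1, -1)]

        def to_colors(bits: list[int]):
            return [RED if b else GREEN for b in bits]

        def from_colors(colors) -> str:
            bits = [1 if c == RED else 0 for c in colors]
            data = bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
                         for k in range(0, len(bits), 8))
            return data.decode()

        url = "http://m.example.com/story"        # hypothetical target page
        assert from_colors(to_colors(to_bits(url))) == url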

  • Operabots take center stage at MIT Media Lab's 'Death and the Powers' opera

    by 
    Donald Melanson
    03.23.2011

    It already had its premiere in Monaco last year, but composer Tod Machover's new opera, "Death and the Powers," has now finally made it to the United States. Why are we reporting on a new opera (rather than Opera) on Engadget? Well, it just so happens to feature the "Operabots" pictured above, which were developed by MIT's Media Lab. The lab also helped develop some of the opera's other high-tech components, but it seems like the Operabots are the real standout -- they're "semi-autonomous" and freely roam around the stage throughout the opera, acting as a Greek chorus. Not surprisingly, the opera itself also deals with some futuristic subject matter. The Powers of the title is Simon Powers, a "Bill Gates, Walt Disney-type" who decides to upload his consciousness into "The System" before he dies -- hijinks then ensue. Those in Boston can apparently still get tickets for the final performance on March 25th -- after that it moves on to Chicago for four performances between April 2nd and 10th. Head on past the break for a preview.

  • MIT Media Lab gets a multiplicitous new logo (video)

    by 
    Vlad Savov
    03.10.2011

    Logos can be surprisingly divisive things, so the MIT Media Lab has decided to cheat a little bit with its new identity: it won't have just one logo, it'll have 40,000. You heard / read / imagined that right, the new Media Lab logo will simply be the concept of three intersecting "spotlights," composed of three colors, straight lines, three black squares, and a few blending gradients. There's an algorithm behind it all, which is used to generate a unique logo for every new member of staff, meaning that although trademark claims may be a headache to enforce, originality will continue thriving in the Lab for a long time to come. Hit the source link to learn more or leap past the break for a nice video rundown.
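
    In that spirit, here's a hedged re-creation of the generative idea: seed a random generator per staff member, pick three grid anchors and beam angles, and composite the colored "spotlights" additively. The palette, geometry and seeding scheme are guesses at the concept, not the Lab's actual algorithm (the sketch assumes Pillow and NumPy).

        # Hypothetical per-person logo generator: three additive light beams.
        import zlib

        import numpy as np
        from PIL import Image, ImageDraw

        def make_logo(name: str, size: int = 300) -> Image.Image:
            """Generate one staff member's logo variant from their name."""
            rng = np.random.default_rng(zlib.crc32(name.encode()))
            canvas = np.zeros((size, size, 3), dtype=np.uint16)
            colors = [(255, 40, 40), (40, 255, 40), (40, 40, 255)]  # guessed palette
            for cell, color in zip(rng.choice(9, size=3, replace=False), colors):
                x0 = (int(cell) % 3) * size // 3 + size // 6   # anchor: cell center
                y0 = (int(cell) // 3) * size // 3 + size // 6
                angle = rng.uniform(0, 2 * np.pi)
                spread = 0.35                                  # beam half-width (radians)
                tip = lambda a: (x0 + 2 * size * np.cos(a), y0 + 2 * size * np.sin(a))
                layer = Image.new("RGB", (size, size))
                ImageDraw.Draw(layer).polygon(
                    [(x0, y0), tip(angle - spread), tip(angle + spread)], fill=color)
                canvas += np.asarray(layer, dtype=np.uint16)   # additive light mixing
            return Image.fromarray(np.clip(canvas, 0, 255).astype(np.uint8))

        make_logo("Ada Lovelace").save("medialab_logo.png")    # hypothetical staffer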

  • Kinect hacks let you control a web browser and Windows 7 using only The Force (updated)

    by 
    Thomas Ricker
    11.25.2010

    Hacking the Xbox 360 Kinect is all about baby steps on the way to what could ultimately amount to some pretty useful homebrew. Here's a good example cooked up by some kids at the MIT Media Lab Fluid Interfaces Group attempting to redefine the human-machine interactive experience. DepthJS is a system that makes JavaScript talk to Microsoft's Kinect in order to navigate web pages, among other things. Remember, it's not that making wild, arm-waving gestures is the best way to navigate a website, it's just a demonstration that you can. Let's hope that the hacking community picks up the work and evolves it into a multitouch remote control plugin for our home theater PCs. Boxee, maybe you can lend a hand?

    Update: If you're willing to step outside of the developer-friendly borders of open-source software, then you'll want to check out Evoluce's gesture solution based on the company's Multitouch Input Management (MIM) driver for Kinect. The most impressive part is its support for simultaneous multitouch and multiuser control of applications (including those using Flash and Java) running on a Windows 7 PC. Evoluce promises to release software "soon" to bridge Kinect and Windows 7. Until then, be sure to check both of the impressive videos after the break. [Thanks, Leakcim13]

  • MIT gestural computing makes multitouch look old hat

    by 
    Vlad Savov
    12.11.2009

    Ah, the MIT Media Lab, home to Big Bird's illegitimate progeny, augmented reality projects aplenty, and now three-dimensional gestural computing. The new bi-directional display being demoed by the Cambridge-based boffins performs both multitouch functions that we're familiar with and hand movement recognition in the space in front of the screen -- which we're also familiar with, but mostly from the movies. The gestural motion tracking is done via embedded optical sensors behind the display, which are allowed to see what you're doing by the LCD alternating rapidly (invisible to the human eye, but probably not to human pedantry) between what it's displaying to the viewer and a pattern for the camera array. This differs from projects like Natal, which have the camera offset from the display and therefore cannot work at short distances, but if you want even more detail, you'll find it in the informative video after the break. [Thanks, Rohit]
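
    The scheduling trick is easy to sketch: alternate the panel between user-visible frames and a capture pattern for the embedded sensors, at a ratio high enough that the eye never notices. The stub functions and duty cycle below are stand-ins to illustrate the frame schedule, not a real driver.

        # Schematic time-multiplexing: mostly image frames, with an occasional
        # capture-pattern frame during which the embedded sensors are sampled.
        import itertools

        def show_image(frame): ...          # stub: drive LCD with user content
        def show_capture_pattern(): ...     # stub: drive LCD with the sensor pattern
        def read_sensor_array(): return []  # stub: sample embedded optical sensors

        def run(frames, duty=4):
            """Show `duty` image frames for every one capture frame."""
            gestures = []
            for i, frame in enumerate(frames):
                show_image(frame)
                if i % duty == duty - 1:
                    show_capture_pattern()  # invisible at panel refresh rates
                    gestures.append(read_sensor_array())
            return gestures

        run(itertools.repeat("frame", 60))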

  • MIT's "sixth sense" augmented reality device demonstrated on video

    by 
    Paul Miller
    02.06.2009

    We've got ourselves some video of MIT's new "sixth sense" project, which really helps explain the concept. MIT basically plans to augment reality with a pendant picoprojector: hold up an object at the store and the device blasts relevant information onto it (like environmental stats, for instance), which can be browsed and manipulated with hand gestures. The "sixth sense" in question is the internet, which naturally supplies the data, and that can be just about anything -- MIT has shown off the device projecting information about a person you meet at a party on that actual person (pictured), projecting flight status on a boarding pass, along with an entire non-contextual interface for reading email or making calls. It's pretty interesting technology, that, like many MIT Media Lab projects, makes the wearer look like a complete dork -- if the projector doesn't give it away, the colored finger bands the device uses to detect finger motion certainly might. There are patents already in the works for the technology, which the MIT folks have been working on "night and day" for the past four months, and we're guessing (and hoping) this isn't the last we'll see of this stuff. Video is after the break.

  • Video: TOFU robot probably tastes like chicken

    by 
    Thomas Ricker
    01.15.2009

    If a Big Bird bender resulted in a bumpin' of nasties with Keepon, well, this would be the genetic result. Meet TOFU, the "squash and stretch" robot with OLED eyes developed by the big brains over at the MIT Media Lab. TOFU applies techniques of social expression long used by 2D animators to explore their impact on robotics. If cute was the goal then we'd call this project a success -- enslave us now oh furry overlords of doom. Video after the break.

  • Seeing the future from the past

    by 
    Mel Martin
    12.30.2008

    We'll be seeing a lot of predictions about the immediate future in the coming days. We're not immune here at TUAW and you'll likely get some predictions from your humble bloggers, but it is really interesting to look back and see how our current technology was (or was not) predicted in the past.

    Here is a link to a talk by Nicholas Negroponte from 1984. At the time, Negroponte was head of the MIT Media Lab, and company CEOs were always taking their people there to see what the future might have to offer. This video is from the year the Macintosh appeared. Negroponte talked about touch screens, high-resolution monitors and the future of user interfaces. It is a fascinating presentation, and his predictions are for the most part right on target. It's almost 30 minutes long, but give it a try and I think you'll find it pretty eye-opening.

    It isn't easy predicting the future. I remember seeing the General Motors film about the future made for the 1939 World's Fair in New York. Most of those predictions were wrong, and very 'Buck Rogers': robots doing housework, automated cars and a lot of other things that haven't come to pass, at least not yet.

    Negroponte, who is now behind the One Laptop Per Child project, has had a very keen eye over time. Many of the things he predicted came to pass in products released by Apple, which have benefited users immensely. [via Funky Space Monkey]

  • Seamless Fashion Show 2006 features iPod-ready couture

    by 
    Fabienne Serriere
    02.05.2006

    Seamless V2, the second annual technology fashion event in Boston, included iPod fashion in its wild mix of wearables. Pictured above is iDo, a wedding dress concept by Shannon Okey and Alexandra Underhill. The veil included an iPod (update number two: yes, a shuffle; the bodice held a full-sized iPod) to immerse the bride in her own musical choices. The iDo description further describes the aims of the project: "...the iDo gown takes the so-called Bridezilla where she seemingly wants to go: her very own solitary walk down the aisle, with full control over music only she can hear accessed using touch-sensitive fabric technology ... and a tiara with built-in iPod." If you're less the Bridezilla and more the iPod extrovert, you may appreciate designer David Lu's iPod Status. A scrolling display for your messenger bag strap, iPod Status shows the world your "Now Playing" status. I love messenger bag strap devices (I've done a few myself) and I think David Lu is onto something with this prototype device. Photo of iPod Status after the jump.