mitmedialab

Latest

  • MIT researchers turn water into 'calm' computer interfaces

    by Aaron Souppouris, 04.24.2018

    Our lives are busy and full of distractions. Modern computing, with its constant notifications and enticing red bubbles next to apps, seems designed to keep us enthralled. MIT Media Lab's Tangible Media Group wants to change that by crafting "calm interfaces." The Tangible Media Group demonstrated a way to precisely transport droplets of liquid across a surface back in January, which it called "programmable droplets." The system is essentially just a printed circuit board, coated with a low-friction material, with a grid of copper wiring on top. By programmatically controlling the electric field of the grid, the team is able to change the shape of polarizable liquid droplets and move them around the surface. The control is precise enough that droplets can be both merged and split. Moving on from the underlying technology, the team is now focused on showing how we might leverage the system to create, play and communicate through natural materials.
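
    The electrode-grid idea above can be sketched as a toy simulation: droplets sit on grid cells, and energizing an electrode pulls neighboring droplets onto it, merging any that arrive together. The `DropletGrid` class, cell coordinates and volumes are our own illustration, not the Tangible Media Group's code.

```python
# Toy model of "programmable droplets": droplets rest on a grid of
# electrodes, and activating an electrode draws adjacent droplets onto it.
# Droplets that land on the same cell merge by combining volumes.

class DropletGrid:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.droplets = {}  # (x, y) -> volume (arbitrary units)

    def add(self, x, y, volume=1.0):
        self.droplets[(x, y)] = self.droplets.get((x, y), 0.0) + volume

    def activate(self, x, y):
        """Energize electrode (x, y): droplets in neighboring cells are
        pulled onto it; droplets arriving together merge."""
        if not (0 <= x < self.width and 0 <= y < self.height):
            raise ValueError("electrode outside grid")
        pulled = 0.0
        for nx, ny in [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]:
            if (nx, ny) in self.droplets:
                pulled += self.droplets.pop((nx, ny))
        if pulled:
            self.add(x, y, pulled)

grid = DropletGrid(4, 4)
grid.add(0, 0)
grid.add(2, 0)
grid.activate(1, 0)   # pulls both neighbors onto (1, 0), merging them
print(grid.droplets)  # {(1, 0): 2.0}
```

Splitting would run the same trick in reverse: energizing two electrodes on opposite sides of one droplet tears it into two smaller ones.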

  • Kinect is pretty great at scanning dino bones

    by David Lumb, 07.05.2017

    When your fancy high-tech tools aren't suited for the job, it's time to call the tinkerers. The Field Museum of Natural History wanted a certain famous Tyrannosaurus rex skull examined with 3D imaging systems, but its dental scanners couldn't fit around the dinosaur's massive jaw. The museum contacted MIT Media Lab's Camera Culture group, which scanned the whole five-foot fossil with a $150 makeshift setup featuring a Microsoft Kinect.
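
    A Kinect-style depth camera reports a depth value per pixel; turning those into a 3D point cloud is standard pinhole-camera back-projection, sketched below. The focal lengths and principal point are typical Kinect-v1 ballpark figures, not the Camera Culture group's calibration.

```python
# Back-project a depth pixel to a 3D point with the pinhole camera model.
# Intrinsics below are illustrative stand-ins for a 640x480 depth image.

FX = FY = 525.0        # focal lengths in pixels
CX, CY = 319.5, 239.5  # principal point

def backproject(u, v, depth_m):
    """Convert pixel (u, v) with depth in meters to a point (x, y, z)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

# A pixel at the principal point maps straight down the optical axis:
print(backproject(319.5, 239.5, 1.5))  # (0.0, 0.0, 1.5)
```

Run over every pixel of every depth frame, with frames registered to each other, this yields the raw point cloud a scanning rig stitches into a mesh.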

  • eBay and LinkedIn founders back research into ethical AI

    by Jon Fingas, 01.11.2017

    Some big names in the tech world aren't just fretting over the possibility of dangerous AI -- they're taking steps to make sure it doesn't happen. LinkedIn founder Reid Hoffman and eBay founder Pierre Omidyar (through his Omidyar Network) are pouring a total of $20 million into a newly created Ethics and Governance of Artificial Intelligence Fund that will fuel research into the social considerations around AI. The organization wants to be sure that machines aren't guided solely by "engineers and corporations" -- they should consider the input of everyone from social scientists through to economists and politicians.

  • ICYMI: Soon flying UAVs could pick stuff up and carry it away

    by Kerry Davis, 09.13.2016

    Today on In Case You Missed It: A large format hexacopter with mechanical gripper arms is all set to swoop in on your backyard and move some chairs around. Going by Prodrone's YouTube video, it can carry 10 kilograms.

  • To stay competitive, Walmart and Target turn to startups for help

    by Andy Meek, 08.25.2016

    Ten startup teams are holed up in Minneapolis through next month to use a new retail-focused accelerator there to launch everything from voice-based search technology for retailers to interactive games that help kids learn STEM concepts. Their workspace is a typical startup bullpen -- an open zone filled with things like boxes of food, Apple products, whiteboards with rows of Post-its and signs hanging from the ceiling that mark each startup's turf.

  • ICYMI: Temporary tat yourself for user interface

    by Kerry Davis, 08.16.2016

    Today on In Case You Missed It: Microsoft and MIT built a computer interface that drops a touchpad into a shiny, golden temporary tattoo. Just as fantastic as you'd imagine, people can use them to input commands, get notifications and store data like NFC tags.

  • MIT's shape-shifting bot can be a phone, lamp or exoskeleton

    by Aaron Souppouris, 11.09.2015

    MIT's Media Lab has created LineFORM, a "Shape Changing Interface" that presents new ways for us to interact with technology. LineFORM is a serpentine robot that has the ability to form a number of shapes, mixing flexibility with rigidity. The lab's Tangible Media Group believes it opens up "new possibilities for display, interaction, and body constraint," and has demonstrated its potential for all three on video.

  • MIT's 'Enigma' system uses bitcoin tricks to share encrypted data

    by Mat Smith, 07.01.2015

    The MIT Media Lab and two bitcoin experts have unveiled a prototype encryption system that lets you share encrypted data with a third party (or have it computed on) without anyone decrypting it. It means untrusted computers could still be tasked with handling sensitive data, but without putting said data at any risk. The trick is called homomorphic encryption, which MIT's Guy Zyskind compares to a black box: "You send whatever data you want, and it runs in the black box and only returns the result. The actual data is never revealed." It does this by hacking the data up into pieces and randomly spreading parts across hundreds of computers in the Enigma network (Enigma being the name of the prototype).
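
    The simplest version of the splitting trick described above is additive secret sharing: break a value into random shares that sum to it modulo a prime, so no single node learns anything, yet the nodes can jointly compute a sum by combining their shares. This is an illustration of the general idea, not Enigma's actual protocol.

```python
import random

# Split a secret into n random shares that sum to it mod a prime; any
# subset smaller than all n shares is statistically useless on its own.

PRIME = 2**61 - 1

def split(secret, n_nodes):
    shares = [random.randrange(PRIME) for _ in range(n_nodes - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

a_shares = split(42, 5)
b_shares = split(100, 5)
# Each node adds its two shares locally; combining the local results yields
# the sum of the secrets without any node ever seeing 42 or 100.
local_sums = [x + y for x, y in zip(a_shares, b_shares)]
print(reconstruct(local_sums))  # 142
```

Note that Python's `random` module is fine for a sketch but not for real cryptography; a production system would use a cryptographically secure source of randomness.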

  • MIT Media Lab's next hackathon will make breast pumps suck less

    by Mat Smith, 09.12.2014

    MIT's Media Lab plans to fix the breast pump. In fact, its fall 2014 hackathon is dedicated to this very aim. Breast pumps are time-consuming, noisy and often painful -- and as the organizers put it (rather TechCrunch-ly): "this is a space that is ripe for further innovation." Several problems are already set to be tackled: the hardened cones that cup over the breasts, the litany of parts, tubes and bottles, as well as a lack of metrics -- existing pumps don't offer any information on how much milk is collected, or when. Over at Quartz, they've added their own ideas to the to-do list, including a closed system that won't be ruined by water, milk and the inevitable mold, as well as a pumping system that's generally more discreet. It's in fact the second breast pump hackathon, but this sequel will encompass 60-80 engineers, designers and breastfeeding experts -- registration is open.

  • MACH system from MIT can coach those with social anxiety

    by Terrence O'Brien, 06.15.2013

    Plenty of people out there have a serious phobia of public speaking and there are tons of other disorders, such as Asperger's, that severely limit a person's ability to handle even simple social interactions. M. Ehsan Hoque, a student at the MIT Media Lab, has made these subjects the focus of his latest project: MACH (My Automated Conversation coacH). At the heart of MACH is a complex system of facial and speech recognition algorithms that can detect subtle nuances in intonation while tracking smiles, head nods and eye movement. The latter is especially important since the front end of MACH is a computer generated avatar that can tell when you break eye contact and shift your attention elsewhere. The software then provides feedback about your performance, helping to prep you for that big presentation or just guide you out of your shell. Experimental data suggests that coaching from MACH could even help you perform better in a job interview. What's particularly exciting is that the program requires no special hardware; it's designed to be used with a standard webcam and microphone on a laptop. So it might not be too long before we start seeing apps designed to help users through social awkwardness. Before you go, make sure to check out the video after the break.
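
    The feedback loop MACH runs -- track nonverbal signals, then summarize them as coaching tips -- can be sketched in a few lines. The gaze samples, thresholds and tip wording below are entirely made up for illustration; the real system works from webcam and microphone features.

```python
# Toy coaching feedback: score eye contact and nodding over a mock answer,
# then turn the scores into advice. Thresholds are invented placeholders.

def coach_feedback(gaze_samples, nods, duration_s):
    """gaze_samples: one True/False eye-contact reading per second."""
    contact = sum(gaze_samples) / len(gaze_samples)
    tips = []
    if contact < 0.6:
        tips.append("hold eye contact more (only %d%% of the time)"
                    % round(contact * 100))
    if nods / duration_s < 0.02:
        tips.append("nod occasionally to signal engagement")
    return tips or ["good job!"]

# One gaze sample per second over a 10-second answer, looking away a lot:
tips = coach_feedback([True, False, False, True, False, False,
                       True, False, False, False], nods=0, duration_s=10)
print(tips)  # two tips: more eye contact, plus occasional nodding
```
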

  • Sony nominates three new board members, looks for fresh perspective

    by Jon Fingas, 05.30.2013

    While Sony has been improving its bank balance as of late, most of that turnaround has come through job cuts and office sales -- the company needs new strategies to thrive in the long run. Accordingly, the firm has nominated three new board directors with experience outside of its core electronics divisions. Tim Schaaff (at right) is a relative insider with his board position at media-focused Sony Network Entertainment, but the same can't be said for his two peers. Eikoh Harada (left) has been turning around McDonald's Japan as its CEO, while Joichi Ito (center) is well-known for his roles as the director of MIT's Media Lab and the founder of Digital Garage. Both Harada and Schaaff also worked at Apple several years ago, giving them experience at one of Sony's main rivals. Provided the three become board members at a shareholder meeting on June 20th, they could bring new thinking to a company frequently accused of clinging to business as usual.

  • Eyes-on: MIT Media Lab's Smarter Objects can map a user interface onto... anything (video)

    by Steve Dent, 04.29.2013

    While patrolling the halls of the CHI 2013 Human Factors in Computing conference in Paris, we spied a research project from MIT's Media Lab called "Smarter Objects" that turns Minority Report tech on its head. The researchers figured out a way to map software functionality onto tangible objects like a radio, light switch or door lock through an iPad interface and a simple processor / WiFi transceiver in the object. Researcher Valentin Huen explains that "graphical user interfaces are perfect for modifying systems," but operating them on a day-to-day basis is much easier using tangible objects. To that end, the team developed an iPad app that uses motion tracking technology to "map" a user interface onto different parts of an object. The example we saw was a simple radio with a pair of dials and a speaker, and when the iPad's camera was pointed at it, a circular interface along with a menu system popped up that cannily tracked the radio. From there, Huen mapped various songs onto different positions of the knob, allowing him to control his playlist by moving it -- a simple, manual interface for selecting music. He was even able to activate a second speaker by drawing a line to it, then "cutting" the line to shut it off. We're not sure when, or if, this kind of tech will ever make it into your house, but the demo we saw (see the pair of videos after the break) seemed impressively ready to go.
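
    The knob-to-playlist mapping in the demo above reduces to dividing the knob's rotation into sectors, one per song. The playlist entries below are placeholders; the Media Lab's actual mapping logic isn't published in this teaser.

```python
# Map a physical knob's angle to a playlist entry by splitting the full
# rotation into equal sectors -- the essence of the Smarter Objects demo.

PLAYLIST = ["Track A", "Track B", "Track C", "Track D"]

def song_for_angle(angle_deg):
    """Map a knob angle (degrees, any value) to a playlist entry."""
    sector = int(angle_deg % 360 // (360 / len(PLAYLIST)))
    return PLAYLIST[sector]

print(song_for_angle(10))   # Track A
print(song_for_angle(100))  # Track B
print(song_for_angle(350))  # Track D
```
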

  • Formlabs FORM 1 high-resolution 3D printer spotted in the wild, we go eyes on (video)

    by James Trew, 10.19.2012

    Last time we checked in with the 3D printing upstarts over at Formlabs, their Kickstarter was doing splendidly, having more than doubled its initial funding target. Well, less than a month later, and with the money still rolling in, the current total stands (at time of writing) at a somewhat impressive $2,182,031 -- over 20 times its initial goal. When we heard that the team behind it, along with some all-important working printers, rolled into town, how could we resist taking the opportunity to catch up? The venue? London's 3D print show. Where, amongst all the printed bracelets and figurines, the FORM 1 stood out like a sore thumb. A wonderfully orange, and geometrically formed one at that. We elbowed our way through the permanent four-deep crowd at their booth to take a closer look, and as the show is running for another two days, you can too if you're in town. Or you could just click past the break for more.

  • FORM 1 delivers high-end 3D printing for an affordable price, meets Kickstarter goal in 1 day

    by Terrence O'Brien, 09.26.2012

    A $2,300 3D printer isn't really anything special anymore. We've seen them as cheap as $350 in fact. But all those affordable units are of the extrusion variety -- meaning they lay out molten plastic in layers. The FORM 1 opts for a method called stereolithography that blasts liquid plastic with a laser, causing the resin to cure. This is one of the most accurate methods of additive manufacturing, but also one of the most expensive thanks to the need for high-end optics, with units typically costing tens of thousands of dollars. A group of recent grads from the MIT Media Lab has managed to replicate the process for a fraction of the cost and founded a company called Formlabs to deliver their innovations to the public. Like many other startups, the group turned to Kickstarter to get off the ground and easily passed its $100,000 goal within its first day. As of this writing over $250,000 had been pledged and the first 25 printers have already been claimed. The FORM 1 is capable of creating objects with layers as thin as 25 microns -- that's 75 percent thinner than even the new Replicator 2. The company didn't scrimp on design and polish to meet its affordability goals either. The base is a stylish brushed metal with the small build platform protected by an orange plastic shell. There's even a companion software tool for simple model creation. You can still get one, though the price of entry is now $2,500, at the Kickstarter page. Or you can simply get a sneak peek in the gallery and video below.
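
    Since stereolithography builds a part in cured layers, resolution translates directly into layer counts: a quick sanity check of the article's numbers (25-micron FORM 1 layers versus the Replicator 2's 100-micron layers).

```python
# How many layers a laser-cured print needs at a given layer thickness.

def layer_count(height_mm, layer_microns):
    return int(height_mm * 1000 / layer_microns)

print(layer_count(50, 25))   # 2000 layers for a 50 mm part on the FORM 1
print(layer_count(50, 100))  # 500 layers at the Replicator 2's thickness
print(1 - 25 / 100)          # 0.75 -> "75 percent thinner" checks out
```
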

  • MIT Media Lab's Tensor Displays stack LCDs for low-cost glasses-free 3D (hands-on video)

    by Zach Honig, 08.09.2012

    Glasses-free 3D may be the next logical step in TV's evolution, but we have yet to see a convincing device make it to market that doesn't come along with a five-figure price tag. The sets that do come within range of tickling our home theater budgets won't blow you away, and it's not unreasonable to expect that trend to continue through the next few product cycles. A dramatic adjustment in our approach to glasses-free 3D may be just what the industry needs, so you'll want to pay close attention to the MIT Media Lab's latest brew. Tensor Displays combine layered low-cost panels with some clever software that assigns and alternates the image at a rapid pace, creating depth that actually looks fairly realistic. Gordon Wetzstein, one of the project creators, explained that the solution essentially "(takes) the complexity away from the optics and (puts) it in the computation," and since software solutions are far more easily scaled than their hardware equivalent, the Tensor Display concept could result in less expensive, yet superior 3D products. We caught up with the project at SIGGRAPH, where the first demonstration included four fixed images, which employed a similar concept as the LCD version, but with backlit inkjet prints instead of motion-capable panels. With each transparency displaying a slightly different static image, the stack gave the appearance of depth without the typical cost. The version that shows the most potential, however, consists of three stacked LCD panels, each displaying a slightly different pattern that flashes back and forth four times per frame of video, creating a three-dimensional effect that appears smooth and natural. The result was certainly more tolerable than the glasses-free 3D we're used to seeing, though it's surely a long way from being a viable replacement for active-glasses sets -- Wetzstein said that the solution could make its way to consumers within the next five years. Currently, the technology works best in a dark room, where it's able to present a consistent image. Unfortunately, this meant the light levels around the booth were a bit dimmer than what our camera required, resulting in the underexposed, yet very informative hands-on video you'll see after the break.
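
    The "flashes back and forth four times per frame" trick works because light passing through stacked transmissive layers multiplies their transmittances, and the eye averages fast subframes. A toy 2x2 example, entirely our own construction: two rank-1 subframes (front-layer pattern times rear-layer pattern) whose time average reproduces a target image.

```python
# Each subframe is an outer product of a front-layer column pattern and a
# rear-layer row pattern; the perceived image is the average over subframes.

def outer(col, row):
    return [[c * r for r in row] for c in col]

def average(frames):
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[i][j] for f in frames) / n for j in range(cols)]
            for i in range(rows)]

sub1 = outer([1.0, 0.0], [1.0, 1.0])  # subframe 1 lights the top row
sub2 = outer([0.0, 1.0], [1.0, 0.0])  # subframe 2 lights bottom-left only
print(average([sub1, sub2]))  # [[0.5, 0.5], [0.5, 0.0]]
```

The real system solves for layer patterns per viewing direction (a low-rank tensor factorization of the light field), but the multiply-then-average structure is the same.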

  • MIT projection system extends video to peripheral vision, samples footage in real-time

    by Alexis Santos, 06.25.2012

    Researchers at the MIT Media Lab have developed an ambient lighting system for video that would make Philips' Ambilight tech jealous. Dubbed Infinity-by-Nine, the rig analyzes frames of footage in real-time -- with consumer-grade hardware no less -- and projects rough representations of the video's edges onto a room's walls or ceiling. Synchronized with camera motion, the effect aims to extend the picture into a viewer's peripheral vision. MIT guinea pigs have reported a greater feeling of involvement with video content when Infinity-by-Nine was in action, and some even claimed to feel the heat from on-screen explosions. A five screen multimedia powerhouse it isn't, but the team suggests that the technology could be used for gaming, security systems, user interface design and other applications. Head past the jump to catch the setup in action.
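
    The per-frame analysis described above boils down to sampling a frame's border pixels and turning them into rough wall colors. The sketch below averages the left and right border columns of a frame -- a deliberate simplification of whatever Infinity-by-Nine actually computes.

```python
# Derive left/right wall colors from the border columns of a video frame,
# where a frame is a list of rows of (R, G, B) tuples.

def wall_colors(frame):
    def avg(pixels):
        n = len(pixels)
        return tuple(round(sum(p[c] for p in pixels) / n) for c in range(3))
    left = [row[0] for row in frame]
    right = [row[-1] for row in frame]
    return {"left_wall": avg(left), "right_wall": avg(right)}

frame = [
    [(255, 0, 0), (9, 9, 9), (0, 0, 255)],
    [(255, 0, 0), (9, 9, 9), (0, 0, 255)],
]
print(wall_colors(frame))
# {'left_wall': (255, 0, 0), 'right_wall': (0, 0, 255)}
```

Run per frame and fed to projectors, this is enough to wash the walls with colors that track the picture's edges.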

  • MIT researchers teach computers to recognize your smile, frustration

    by Sean Buckley, 05.28.2012

    Wipe that insincere, two-faced grin off your face -- your computer knows you're full of it. Or at least it will once it gets a load of MIT's research on classifying frustration, delight and facial expressions. By teaching a computer how to differentiate between involuntary smiles of frustration and genuine grins of joy, researchers hope to be able to deconstruct the expression into low-level features. What's the use of a disassembled smile? In addition to helping computers suss out your mood, the team hopes the data can be used to help people with autism learn to more accurately decipher expressions. Find out how MIT is making your computer a better people person than you after the break. [Thanks, Kaustubh]
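
    One low-level feature the MIT work leans on is smile dynamics: frustrated smiles tend to appear abruptly, while delighted ones build gradually. A toy classifier on just that feature, with a made-up threshold -- the real system combines many such features.

```python
# Classify a smile by its steepest rise in intensity between samples.
# The 0.3 threshold is an invented placeholder, not a research value.

def classify_smile(intensity_series, rise_threshold=0.3):
    rates = [b - a for a, b in zip(intensity_series, intensity_series[1:])]
    return "frustrated" if max(rates) > rise_threshold else "delighted"

gradual = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]  # slow, steady build-up
abrupt = [0.0, 0.0, 0.6, 0.6, 0.2, 0.0]   # sudden onset, quick fade
print(classify_smile(gradual))  # delighted
print(classify_smile(abrupt))   # frustrated
```
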

  • ZeroN slips surly bonds, re-runs your 3D gestures in mid-air

    by Steve Dent, 05.14.2012

    Playback of 3D motion capture with a computer is nothing new, but how about with a solid levitating object? MIT's Media Lab has developed ZeroN, a large magnet and 3D actuator, which can fly an "interaction element" (aka ball bearing) and control its position in space. You can also bump it to and fro yourself, with everything scanned and recorded, and then have real-life, gravity-defying playback showing planetary motion or virtual cameras, for example. It might be impractical right now as a Minority Report-type object-based input device, but check the video after the break to see its awesome potential for 3D visualization.
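
    Holding a ball bearing at a commanded point means constantly trimming the magnet's pull against gravity. A minimal 1-D sketch of that control loop: a PD controller with gravity feedforward, simulated with Euler steps. All gains and constants are invented for illustration.

```python
# 1-D levitation sketch: PD control plus gravity feedforward drives the
# ball's height z toward a setpoint. Constants are illustrative only.

G = 9.81      # gravity, m/s^2
MASS = 0.01   # ball mass, kg
DT = 0.001    # control-loop timestep, s
KP, KD = 50.0, 5.0

def simulate(setpoint, steps=5000):
    z, v = 0.0, 0.0  # start at 0 m, at rest
    for _ in range(steps):
        error = setpoint - z
        force = MASS * G + KP * error - KD * v  # feedforward + PD
        a = force / MASS - G
        v += a * DT
        z += v * DT
    return z

print(round(simulate(0.05), 3))  # 0.05 -- ball holds at the setpoint
```

The real system does this in 3-D with magnetic field shaping and optical tracking, but the hold-position-against-gravity loop is the same idea.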

  • EyeRing finger-mounted connected cam captures signs and dollar bills, identifies them with OCR (hands-on)

    by Zach Honig, 04.25.2012

    Ready to swap that diamond for a finger-mounted camera with a built-in trigger and Bluetooth connectivity? If it could help identify otherwise indistinguishable objects, you might just consider it. The MIT Media Lab's EyeRing project was designed with an assistive focus in mind, helping visually disabled persons read signs or identify currency, for example, while also serving to assist children during the tedious process of learning to read. Instead of hunting for a grownup to translate text into speech, a young student could direct EyeRing at words on a page, hit the shutter release, and receive a verbal response from a Bluetooth-connected device, such as a smartphone or tablet. EyeRing could be useful for other individuals as well, serving as an ever-ready imaging device that enables you to capture pictures or documents with ease, transmitting them automatically to a smartphone, then on to a media sharing site or a server. We peeked at EyeRing during our visit to the MIT Media Lab this week, and while the device is buggy at best in its current state, we can definitely see how it could fit into the lives of people unable to read posted signs, text on a page or the monetary value of a currency note. We had an opportunity to see several iterations of the device, which has come quite a long way in recent months, as you'll notice in the gallery below. The demo, which like many at the Lab includes a Samsung Epic 4G, transmits images from the ring to the smartphone, where text is highlighted and read aloud using a custom app. When we snapped the word "ring," it took a dozen or so attempts before the rig correctly read it aloud, but considering that we've seen much more accurate OCR implementations, it's reasonable to expect a more advanced version of the software to make its way out once the hardware is a bit more polished -- at this stage, EyeRing is more about the device itself, which had some issues of its own maintaining a link to the phone. You can get a feel for how the whole package works in the video after the break, which required quite a few takes before we were able to capture an accurate reading.
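
    EyeRing's flow is capture, then OCR, then speech -- with retries, as the flaky demo showed. The skeleton below wires those stages together as injected functions, so the same structure would accept a real OCR engine and TTS voice; the stubs here are our placeholders, not the Media Lab's software.

```python
# Capture -> OCR -> speak pipeline with retries, stages injected as
# functions so real implementations can be swapped in.

def eyering_pipeline(capture, ocr, speak, max_attempts=3):
    """Retry capture+OCR until text is found, then speak it."""
    for _ in range(max_attempts):
        image = capture()
        text = ocr(image)
        if text:
            speak(text)
            return text
    speak("no text found")
    return None

spoken = []
result = eyering_pipeline(
    capture=lambda: "fake-image-bytes",  # stand-in for the ring's camera
    ocr=lambda img: "ring",              # stand-in for a real OCR call
    speak=spoken.append,                 # stand-in for text-to-speech
)
print(result, spoken)  # ring ['ring']
```
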

  • Perifoveal Display tracks head positioning, highlights changing data on secondary LCDs (hands-on)

    by Zach Honig, 04.25.2012

    If there's a large display as part of your workstation, you know how difficult it can be to keep track of all of your windows simultaneously, without missing a single update. Now imagine surrounding yourself with three, or four, or five jumbo LCDs, each littered with dozens of windows tracking realtime data -- be it RSS feeds, an inbox or chat. Financial analysts, security guards and transit dispatchers are but a few of the professionals tasked with monitoring such arrays, constantly scanning each monitor to keep abreast of updates. One project from the MIT Media Lab offers a solution, pairing Microsoft Kinect cameras with detection software, then highlighting changes with a new graphical user interface. Perifoveal Display presents data at normal brightness on the monitor that you're facing directly. Then, as you move your head to a different LCD, that panel becomes brighter, while changes on any of the displays that you're not facing directly (but still remain within your peripheral vision) -- a rising stock price, or motion on a security camera -- are highlighted with a white square, which slowly fades once you turn to face the new information. During our hands-on demo, everything worked as described, albeit without the instant response times you may expect from such a platform. As with most Media Lab projects, there's no release date in sight, but you can gawk at the prototype in our video just after the break.
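
    Perifoveal Display's core rule is easy to state: full brightness on the monitor you face, dimmed brightness elsewhere, plus a highlight on any unfaced monitor where data just changed. The constants and state layout below are our own guesses at that logic, not the project's code.

```python
# Per-monitor (brightness, highlighted) state given which monitor the
# viewer faces and which monitors have fresh changes.

BRIGHT, DIM = 1.0, 0.4  # illustrative brightness levels

def monitor_states(facing, changed, n_monitors):
    states = []
    for m in range(n_monitors):
        brightness = BRIGHT if m == facing else DIM
        # Only monitors outside the viewer's direct gaze get the white
        # highlight square; it would fade once the viewer turns to them.
        highlighted = m != facing and m in changed
        states.append((brightness, highlighted))
    return states

# Viewer faces monitor 0 while a stock ticks up on monitor 2:
print(monitor_states(0, changed={2}, n_monitors=3))
# [(1.0, False), (0.4, False), (0.4, True)]
```
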