Machine Learning

Latest

  • A sheep dog's herding instinct may teach robots a lesson in crowd control

    by Chris Velazco
    08.28.2014

    Here's a noodle-scratcher to occupy yourself with for a few moments: what makes a sheep dog so darned good at rounding up the woolly ruminants it's named after? One possible answer, according to The Telegraph: researchers at Swansea University believe those dogs constantly search for and close the gaps between the sheep before driving them all forward. What's the big deal? Well, those very same researchers think that behavior can be boiled down into an algorithm that could be used to (among other things) program robots to replace those savvy canines. Sure, some old-school shepherds may scoff, but using awkward-looking machines to round up livestock isn't exactly new territory. And if a robot can "understand" how to steer some relatively dumb animals around a field, it stands to reason that logic could be used to guide other organisms around... like humans trying to escape a burning building, for instance. No, really! Swansea University's Dr. Andrew King says there's a whole host of ways to adapt that animal knowledge into robotic know-how, like "crowd control, cleaning up the environment, herding of livestock, [and] keeping animals away from sensitive areas".
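    The gap-minimizing rule described above can be sketched in a few lines. This is a loose, hypothetical simplification (the two-mode collect/drive split, the function names and the `collect_radius` threshold are all assumptions inspired by the article), not the Swansea team's actual algorithm:

```python
import math

def herd_step(sheep, target, collect_radius=2.0):
    """One decision step for the dog: collect the worst stray first,
    otherwise drive the whole flock toward the target.
    All positions are (x, y) tuples; the numbers are illustrative."""
    cx = sum(s[0] for s in sheep) / len(sheep)
    cy = sum(s[1] for s in sheep) / len(sheep)
    # Find the sheep farthest from the flock's centre of mass.
    stray = max(sheep, key=lambda s: math.hypot(s[0] - cx, s[1] - cy))
    if math.hypot(stray[0] - cx, stray[1] - cy) > collect_radius:
        # Collecting: aim just beyond the stray, on the far side from
        # the centroid, to push it back toward the flock.
        dx, dy = stray[0] - cx, stray[1] - cy
        norm = math.hypot(dx, dy)
        return ("collect", (stray[0] + dx / norm, stray[1] + dy / norm))
    # Driving: position behind the centroid relative to the target.
    dx, dy = cx - target[0], cy - target[1]
    norm = math.hypot(dx, dy) or 1.0
    return ("drive", (cx + dx / norm, cy + dy / norm))
```

    A scattered flock makes the dog collect; a tight one makes it drive.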

  • Competition coaxes computers into seeing our world more clearly

    by Chris Velazco
    08.19.2014

    As surely as the seasons turn and the sun races across the sky, the ImageNet Large Scale Visual Recognition Challenge (or ILSVRC2014, for those in the know) came to a close this week. That might not mean much to you, but it does mean some potentially big things for those trying to teach computers to "see". You see, the competition -- which has been running annually since 2010 -- fields teams from Oxford, the National University of Singapore, the Chinese University of Hong Kong and Google, who cook up awfully smart software meant to coax high-end machines into recognizing what's happening in pictures as well as we can.

  • GM wants voice-controlled cars that learn what you really mean

    by Jon Fingas
    07.15.2014

    Voice control is easy to find in cars, but it's not always intuitive. You often have to use specific syntax, which might be hard to remember when you're barreling down the highway. GM may have a smarter approach in store, though. The Wall Street Journal understands that the automaker is working with machine learning firm VocalIQ on an "advanced voice-control system" that would let you control navigation, wipers and other car components in a more intuitive way.
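    VocalIQ's actual system learns from dialogue data, so the following is only a crude stand-in; the intent names and keyword lists are invented. It does illustrate the difference from rigid-syntax voice control, though: score free-form utterances against intents instead of demanding an exact phrase:

```python
def parse_command(utterance):
    """Toy intent matcher for 'say what you mean' voice control.
    This hypothetical version just scores keyword overlap per intent."""
    intents = {
        "navigate": {"take", "go", "drive", "navigate", "route", "directions"},
        "wipers":   {"wipers", "wiper", "rain", "windshield"},
        "climate":  {"cold", "hot", "warm", "temperature", "ac"},
    }
    words = set(utterance.lower().replace(",", "").split())
    scores = {name: len(words & keys) for name, keys in intents.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

    So "take me to the nearest gas station" and "I can't see through this rain" both resolve without the driver memorizing any fixed syntax.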

  • Google uses self-aware datacenters to cut the cost of searching

    by Steve Dent
    05.28.2014

    Google spits out about 4 million search results per minute (among many other duties), which consumes a lot of energy. According to a recent blog post, it cut its electrical bills significantly by applying the same kind of machine learning used in speech recognition and other consumer applications. A data center engineer on a 20 percent project plotted environmental factors like outside air temperature, IT load and other server-related factors. He then developed a neural network that could see the "underlying story" in the data, predicting loads with 99.6 percent accuracy. With a bit more work, Mountain View managed to eke out significant savings by varying cooling and other factors. It also published a white paper to share the info with other data centers and prove once again that humans are redundant.
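    A toy version of the idea, emphatically not Google's model: a small feed-forward network trained by backpropagation to map environmental features to an efficiency-like score. The data, layer sizes and hyperparameters here are all made up:

```python
import math, random

def train_pue_model(data, epochs=500, lr=0.1, hidden=3, seed=0):
    """Tiny one-hidden-layer regression network trained by plain SGD.
    `data` is a list of ((feature, ...), target) pairs, e.g. normalised
    outside temperature and IT load versus a PUE-like score."""
    rnd = random.Random(seed)
    n_in = len(data[0][0])
    w1 = [[rnd.uniform(-1, 1) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rnd.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, y in data:
            h = [sig(sum(w * xi for w, xi in zip(ws, x)) + b)
                 for ws, b in zip(w1, b1)]
            pred = sum(w * hi for w, hi in zip(w2, h)) + b2
            err = pred - y
            for j in range(hidden):
                # Backprop: hidden gradient uses the pre-update w2[j].
                grad_h = err * w2[j] * h[j] * (1 - h[j])
                w2[j] -= lr * err * h[j]
                for i in range(n_in):
                    w1[j][i] -= lr * grad_h * x[i]
                b1[j] -= lr * grad_h
            b2 -= lr * err
    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(ws, x)) + b)
             for ws, b in zip(w1, b1)]
        return sum(w * hi for w, hi in zip(w2, h)) + b2
    return predict
```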

  • Microsoft teaches robots how to deal with groups and draw from memory

    by Jon Fingas
    04.09.2014

    Us humans are good at predicting how people will behave, particularly in groups, but artificial intelligence routines still have trouble dealing with much more than controlled, one-on-one discussions. They'll be far more flexible if Microsoft's Situated Interaction project pays off, though. The research initiative has produced sensor-equipped robots that can not only recognize multiple people, but infer their objectives and roles. Office assistants can tell who's speaking and respond in kind, while a lobby robot can call the elevator if it sees a crowd headed in that direction.

  • Facebook's face-recognition tech is almost as good at Stallone-spotting as you are

    by Jamie Rigg
    03.18.2014

    Facebook's long been interested in facial recognition, as the photo tag-suggestion feature that didn't go down too well in Europe shows. The Zuck's social network also gobbled up a face-recognition outfit in 2012, but it's Facebook's AI research team that's made headway recently with technology that's almost as good as us meatsacks at identifying mugs. Known as DeepFace, the system uses a "nine-layer deep neural network" that's been taught to pick up on patterns by looking at over 4 million photos of more than 4,000 people. We're not as up-to-date with complex machine learning techniques as we should be, either, but the main reason DeepFace is so accurate is its method of "frontalization" -- that is, creating a front-facing portrait from a source photo taken at an angle.
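    DeepFace's frontalization fits a full 3D face model, but the underlying alignment idea can be shown in 2D: compute the similarity transform that maps detected eye landmarks onto fixed canonical positions, so every face enters the network the same way up. The canonical coordinates below are invented for illustration:

```python
import math

def similarity_transform(left_eye, right_eye,
                         canon_left=(0.35, 0.4), canon_right=(0.65, 0.4)):
    """Return a warp function mapping image points into a canonical frame
    where the eyes sit at fixed, level positions (a 2D toy version of
    face alignment; real frontalization is 3D)."""
    dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
    scale = (canon_right[0] - canon_left[0]) / math.hypot(dx, dy)
    angle = -math.atan2(dy, dx)          # rotate eyes onto a horizontal line
    cos_a, sin_a = math.cos(angle) * scale, math.sin(angle) * scale

    def warp(p):
        x, y = p[0] - left_eye[0], p[1] - left_eye[1]
        return (canon_left[0] + cos_a * x - sin_a * y,
                canon_left[1] + sin_a * x + cos_a * y)
    return warp
```

    Applying `warp` to every pixel coordinate of a tilted face yields the upright, normalized crop the classifier expects.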

  • Google uses computer vision and machine learning to index your photos

    by Terrence O'Brien
    05.23.2013

    Tags are so 2008. Google doesn't want you to waste time tagging your photos, except for the people in them. The web giant wants to be able to recognize more abstract concepts like "sunset" or "beach" automatically and attach that metadata without further input. In yet another post-I/O update, Google+ photos now uses computer vision and machine learning to identify objects and settings in your uploaded snapshots. You can simply search for "my photos of trees" or "Tim's photos of bikes" and get surprisingly accurate results, with nary a manually added tag in sight. You can perform the searches in Google+, obviously, but you can also execute your query from the standard Google search page. It's pretty neat, but sadly Mountain View seems to have forgotten what cats look like.
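    Once a vision model has attached concept labels to each photo, the search side can be as simple as an inverted index. A minimal sketch with made-up labels and a hypothetical stop-word list (this says nothing about how Google+ actually implements it):

```python
def build_label_index(photo_labels):
    """Map each auto-generated label to the set of photos carrying it."""
    index = {}
    for photo, labels in photo_labels.items():
        for label in labels:
            index.setdefault(label, set()).add(photo)
    return index

def search(index, query):
    """Return photos matching every content word in a query
    like 'my photos of trees'."""
    stop = {"my", "photo", "photos", "of"}
    terms = [w for w in query.lower().split() if w not in stop]
    results = None
    for term in terms:
        hits = index.get(term, set())
        results = hits if results is None else results & hits
    return results or set()
```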

  • Google and NASA team up for D-Wave-powered Quantum Artificial Intelligence Lab

    by Terrence O'Brien
    05.16.2013

    Google. NASA. Quantum computers. Seriously, everything about the new Quantum Artificial Intelligence Lab at the Ames Research Center is exciting. The joint effort between Mountain View and America's space agency will put a 512-qubit machine from D-Wave at the disposal of researchers from around the globe, with the USRA (Universities Space Research Association) inviting teams of scientists and engineers to share time on the unique super computer. The goal is to study how quantum computing might be leveraged to advance machine learning, a branch of AI that has proven crucial to Google's success. The internet giant has already done some work with quantum computing before; now the goal is to see if its experimentation can translate into real-world results. The idea, for Google at least, is to combine the extreme (but highly specialized) power of the quantum bit with its oceans of traditional data centers to build more accurate models for everything from speech recognition to web search. And maybe, just maybe, with the help of quantum computers your phone will finally realize you didn't mean to say "duck."

  • Carnegie Mellon researchers develop robot that takes inventory, helps you find aisle four

    by Alexis Santos
    06.30.2012

    Fed up with wandering through supermarket aisles in an effort to cross that last item off your shopping list? Researchers at Carnegie Mellon University's Intel Science and Technology Center in Embedded Computing have developed a robot that could ease your pain and help store owners keep items in stock. Dubbed AndyVision, the bot is equipped with a Kinect sensor, image processing and machine learning algorithms, 2D and 3D images of products and a floor plan of the shop in question. As the mechanized worker roams around, it determines if items are low or out of stock and if they've been incorrectly shelved. Employees then receive the data on iPads and a public display updates an interactive map with product information for shoppers to peruse. The automaton is currently meandering through CMU's campus store, but it's expected to wheel out to a few local retailers for testing sometime next year. Head past the break to catch a video of the automated inventory clerk at work.
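    The stock-audit logic can be sketched as a comparison between the store's planogram (what should be on each shelf) and what the robot's vision system detected. The data format and product names below are invented, not AndyVision's actual representation:

```python
def audit_shelf(planogram, detections):
    """Compare expected shelf contents against detected counts and flag
    out-of-stock, low-stock and misplaced items for the staff's iPads."""
    report = {"out_of_stock": [], "low_stock": [], "misplaced": []}
    for shelf, expected in planogram.items():
        for product, min_count in expected.items():
            seen = detections.get(shelf, {}).get(product, 0)
            if seen == 0:
                report["out_of_stock"].append((shelf, product))
            elif seen < min_count:
                report["low_stock"].append((shelf, product))
    # Anything detected on a shelf where it doesn't belong is misplaced.
    for shelf, found in detections.items():
        for product in found:
            if product not in planogram.get(shelf, {}):
                report["misplaced"].append((shelf, product))
    return report
```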

  • Intel research hopes to give computers human smarts, appreciate our idiosyncrasies

    by James Trew
    05.24.2012

    Intel's chief technology officer, Justin Rattner, doesn't own a smartphone. Well, not by his definition anyway. Talking in Tel Aviv, Rattner was evangelizing about the opportunities in machine learning, and outlining the goals of the firm's Collaborative Research Institute for Computational Intelligence. Working with Technion and the Hebrew University of Jerusalem, Intel plans to develop small, wearable computers that learn our behavioral patterns -- like where we left our keys -- and other things today's "smart" phones could never do. Intel's Israeli president, Mooly Eden, went on to claim that within five years, all five senses will be computerized, and in a decade, transistors per chip will outnumber neurons in the human brain. All that tech to stop you locking yourself out.

  • Tenacious robot ashamed of creator's performance, shows mankind how it's done (video)

    by Sean Buckley
    05.19.2011

    Looks like researchers have made another step towards taking Skynet live: giving robots the groundwork for gloating. A Swiss team of misguided geniuses have developed learning algorithms that allow robot-kind to learn from human mistakes. Earthlings guide the robot through a flawed attempt at completing a task, such as catapulting a ball into a paper basket; the machine then extrapolates its goal, what went wrong in the human-guided example, and how to succeed, via trial and error. Rather than presuming human demonstrations represent a job well done, this new algorithm assumes all human examples are failures, ultimately using their bad examples to help the 'bot one-up its creators. Thankfully, the new algorithm is only being used with a single hyper-learning appendage; heaven forbid it should ever learn how to use the robot-internet.
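    The "assume every demo is a failure" idea can be illustrated with a toy catapult: start from the human's (failed) setting, then refine it by trial and error. The physics, the step-shrinking schedule and all numbers here are invented; the Swiss team's actual algorithm is considerably more sophisticated:

```python
import random

def learn_from_failure(demo_power, distance_for, trials=200, seed=1):
    """Hill-climb from a failed human demonstration. `distance_for(power)`
    is a stand-in for the physical throw; the basket sits at distance 10."""
    target = 10.0
    best = demo_power
    best_err = abs(distance_for(best) - target)
    rnd = random.Random(seed)
    step = 1.0
    for _ in range(trials):
        candidate = best + rnd.uniform(-step, step)
        err = abs(distance_for(candidate) - target)
        if err < best_err:          # improvement: keep it
            best, best_err = candidate, err
        else:                       # another failure: narrow the search
            step = max(step * 0.98, 0.05)
    return best, best_err
```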

  • Schizophrenic computer may help us understand similarly afflicted humans

    by Sean Buckley
    05.11.2011

    Although we usually prefer our computers to be perfect, logical, and psychologically fit, sometimes there's more to be learned from a schizophrenic one. A University of Texas experiment has doomed a computer with dementia praecox, saddling the silicon soul with symptoms that normally only afflict humans. By telling the machine's neural network to treat everything it learned as extremely important, the team hopes to aid clinical research in understanding the schizophrenic brain -- following a popular theory that suggests afflicted patients lose the ability to forget or ignore frivolous information, causing them to make illogical connections and paranoid jumps in reason. Sure enough, the machine lost it, and started spinning wild, delusional stories, eventually claiming responsibility for a terrorist attack. Yikes. We aren't hastening the robot apocalypse if we're programming machines to go mad intentionally, right?
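    One way to picture the "everything is important" manipulation is a Hebbian co-occurrence memory whose forgetting step has been switched off. This is only an analogy to the experiment described, with invented parameters, not the researchers' actual network:

```python
def associations(stories, learn_rate, decay):
    """Build word-pair association strengths from a stream of stories.
    A healthy `decay` prunes weak, incidental links after each story;
    decay = 0 means nothing is ever forgotten."""
    weights = {}
    for story in stories:
        words = story.split()
        for i, a in enumerate(words):
            for b in words[i + 1:]:
                key = tuple(sorted((a, b)))
                weights[key] = weights.get(key, 0.0) + learn_rate
        # Forgetting step: every association fades a little.
        for key in list(weights):
            weights[key] -= decay
            if weights[key] <= 0:
                del weights[key]
    return weights

def linked(weights, a, b):
    return tuple(sorted((a, b))) in weights
```

    With decay disabled, every one-off co-occurrence is retained as if it mattered, which is the flavor of the "can't forget or ignore" theory above.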

  • Fujitsu's HOAP-2 robot wipes whiteboards clean -- humankind next in line? (video)

    by Christopher Trout
    02.18.2011

    They've taught them how to flip pancakes and shoot arrows, and now they're teaching humanoids to erase your whiteboard. That's right, the same folks who brought you iCub in a feathery headdress are back at it with Fujitsu's HOAP-2, a humanoid robot that looks like it's related to the Jetsons' maid, and can wipe a dry erase board clean via upper-body kinesthetic learning. While scientists force the robot's arm through a number of erasing movements, an attached force-torque sensor records the patterns, allowing HOAP-2 to mimic its previous actions, and voilà! You've got a blank slate. Sure, this little guy looks perfectly harmless in comparison with the bow-and-arrow-wielding iCub, but replace that eraser with a switchblade and the human race is in a whole world of hurt.
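    At its core, kinesthetic teaching is record-and-replay of guided motion. A minimal sketch (HOAP-2's real controller also uses the force-torque readings to generalize the motion; this toy version just stores joint poses):

```python
class KinestheticArm:
    """Record joint poses while a human guides the arm, then replay them."""
    def __init__(self):
        self.trajectory = []
        self.recording = False

    def start_recording(self):
        self.trajectory = []
        self.recording = True

    def guide(self, joint_angles):
        """Called each control tick while the human moves the arm."""
        if self.recording:
            self.trajectory.append(tuple(joint_angles))

    def stop_recording(self):
        self.recording = False

    def replay(self):
        """Yield the stored poses in order, as the controller would."""
        yield from self.trajectory
```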

  • Robot Archer iCub learns to shoot arrows, pierces our mortal heart (video)

    by Sean Hollister
    09.25.2010

    How do you make a creepy baby robot downright cute? Give it an Indian headdress and teach it the bow-and-arrow, of course. The same team of researchers who brought us the pancake-flipping robot arm have imbued this iCub with a learning algorithm that lets it teach itself archery much the same as a human might do, by watching where the suction-tipped arrow lands and adjusting its aim for each subsequent shot. In this case, it obtained a perfect bullseye after just eight attempts. Watch it for yourself after the break, and ponder the fate of man -- how can we possibly stop an uprising of adorable robots that never miss?
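    The shoot-observe-adjust loop can be written as a one-dimensional sketch: nudge the aim against each observed miss. The gain, tolerance and `shoot` interface are assumptions for illustration, not the iCub team's actual learning algorithm:

```python
def learn_aim(shoot, initial_aim=0.0, gain=0.8, max_shots=50, tol=0.01):
    """Iteratively correct aim from observed landing error.
    `shoot(aim)` returns the signed distance from the bullseye."""
    aim = initial_aim
    for shot in range(1, max_shots + 1):
        miss = shoot(aim)
        if abs(miss) <= tol:
            return aim, shot          # close enough: call it a bullseye
        aim -= gain * miss            # adjust against the observed error
    return aim, max_shots
```

    With a gain below 1 the error shrinks geometrically, which is why a handful of shots suffices in the ideal, noise-free case.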

  • DARPA sets sights on cameras that understand

    by Sean Hollister
    03.18.2010

    DARPA wants to let you all know that its plans for the robot apocalypse are still going strong. The agency's got IBM working on the brains, has an RFI out on the skin, and is handling propulsion and motor control in-house. Next up? Eyeballs. In order to give its robots the same sort of "visual intelligence" currently limited to animals, DARPA is kicking off a new program called The Mind's Eye with a one-day scientific conference this April. The goal is a "smart camera" that can not only recognize objects, but also describe what they're doing and why, allowing unmanned bots and surveillance systems to report back, or -- we're extrapolating here -- make tactical decisions of their own. To be clear, there's no funding or formal proposal requests for this project quite yet. But if the code does come to fruition, DARPA, please: make sure autoexec.bat includes a few Prime Directives.