Artificial Intelligence

Latest

  • NELL machine learning system could easily beat you at Trivial Pursuit

    by Joseph L. Flatley
    10.12.2010

    If you had told us fifteen years ago that some day, deep in the bowels of Carnegie Mellon University, a supercomputer cluster would scan hundreds of millions of Web pages, examine text patterns, and teach itself about the Ramones, we might have believed you -- we were into some far-out stuff back then. But this project is about more than the make of Johnny's guitar (Mosrite) or the name of the original drummer (Tommy). NELL, or the Never-Ending Language Learning system, constantly surfs the Web and classifies everything it scans into specific categories (such as cities, universities, and musicians) and relations. One example The New York Times cites: Peyton Manning is a football player (category). The Indianapolis Colts is a football team (category). By scanning text patterns, NELL can infer with high probability that Peyton Manning plays for the Indianapolis Colts -- even if it has never read that Mr. Manning plays for the Colts. But sports and music factoids aside, the system is not without its flaws. For instance, when Internet cookies were categorized as baked goods, "[i]t started this whole avalanche of mistakes," according to researcher Tom M. Mitchell. Apparently, NELL soon "learned" that one could delete pastries (the mere thought of which is sure to give us night terrors for quite some time). Luckily, human operators stepped in and corrected the thing, and now it's back on course, accumulating data and giving researchers insights that might someday lead to a true semantic web.
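    The Manning-and-Colts inference can be sketched in miniature. The Python below is purely illustrative -- the sentences, pattern set, and function names are all invented for this post, not taken from NELL -- but it shows the basic idea: text patterns that connect a phrase in the "athlete" category to one in the "sports_team" category vote for a plays-for relation.

```python
from collections import Counter

# Seed category beliefs (toy stand-ins for NELL's knowledge base)
categories = {"Peyton Manning": "athlete", "Indianapolis Colts": "sports_team"}

# Sentences "read from the web", reduced to (subject, pattern, object) triples
sentences = [
    ("Peyton Manning", "threw a touchdown for the", "Indianapolis Colts"),
    ("Peyton Manning", "led the", "Indianapolis Colts"),
    ("Peyton Manning", "signed with the", "Indianapolis Colts"),
]

# Patterns assumed (from prior learning) to express plays_for(athlete, team)
plays_for_patterns = {"threw a touchdown for the", "led the", "signed with the"}

def infer_plays_for(subj, obj):
    """Estimate confidence that subj plays for obj from pattern votes."""
    # Category check: only athlete/team pairs are candidates for the relation
    if categories.get(subj) != "athlete" or categories.get(obj) != "sports_team":
        return 0.0
    votes = Counter(p for s, p, o in sentences if s == subj and o == obj)
    hits = sum(votes[p] for p in plays_for_patterns)
    total = sum(votes.values())
    return hits / total if total else 0.0

print(infer_plays_for("Peyton Manning", "Indianapolis Colts"))  # 1.0
```

    The cookie mishap is the same mechanism running in reverse: one bad category assignment ("cookie" is a baked good) lets every downstream pattern vote for nonsense relations.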

  • Google and TU Braunschweig independently develop self-driving cars (video)

    by Sean Hollister
    10.09.2010

    There's a Toyota Prius in California, and a VW Passat halfway around the globe -- each equipped with bucket-shaped contraptions that let the cars drive themselves. Following its research on autonomous autos in the DARPA Urban Challenge, a team at Germany's TU Braunschweig let the above GPS-, laser- and sensor-guided Volkswagen wander down the streets of Brunswick unassisted late last week, and today Google revealed that it has secretly tested seven similar vehicles built by the folks who won that same competition. CMU and Stanford engineers have designed a programmable package that can drive at the speed limit on regular streets and merge into highway traffic, stop at red lights and stop signs, and automatically react to hazards -- much like the German vehicle -- except Google says its seven autos have already gone 1,000 unassisted miles. That's still a drop in the bucket, of course, compared to the effort it will take to bring the technology home -- Google estimates self-driving vehicles are at least eight years down the road. Watch the TU Braunschweig vehicle in action after the break. Update: Though Google's cars have driven 1,000 miles fully autonomously, that's a small fraction of the time they've spent steering for themselves. We've learned the vehicles have gone 140,000 miles with occasional human interventions, which were often a matter of procedure rather than a compelling need for their human drivers to take control. [Thanks, el3ktro]

  • Swiss researchers show off brain-controlled, AI-augmented wheelchair

    by Donald Melanson
    09.07.2010

    They're far from the first to try their hand at a brain-controlled wheelchair, but some researchers at the École polytechnique fédérale de Lausanne (or EPFL) in Switzerland seem to have pulled off a few new tricks with their latest project. Like some similar systems, this one relies on EEG readings to detect specific brain patterns, but it backs that up with some artificial intelligence that the researchers say allows for "shared control" of the wheelchair. That latter component is aided by a pair of cameras and some image processing software that allows the wheelchair to avoid obstacles, but it doesn't stop there -- the software is also able to distinguish between different types of objects. According to the researchers, that could let it go around a cabinet but pull up underneath a desk, for instance, or potentially even recognize the person's own desk and avoid others. Head on past the break to check it out in action.

  • William Gibson: We never imagined that AI would be like this

    by Joseph L. Flatley
    09.02.2010

    William Gibson, primary philosopher and poet laureate of cyberspace, had an op-ed published in The New York Times recently, where he writes of a force almost beyond comprehension, "a central and evolving structural unit not only of the architecture of cyberspace, but of the world." This is artificial intelligence unlike any we have seen, an organ of global human perception in which we are both the surveilled and the surveillant, what is in essence "a post-geographical, post-national super-state." And what is this force called? That's right: Google.

  • Driverless vans set off on intercontinental trek from Italy to China (video)

    by Vlad Savov
    07.23.2010

    You might not have expected the future to look like your granddad's groovy camper van, but take a closer look here and you'll find that this is indeed nothing like your forefather's people carrier. The VisLab team from the University of Parma has taken a fleet of Piaggio Porter Electric vehicles, strapped them with an array of cameras, lasers and other sensors, and topped them off with solar panels to keep the electronics powered. Oh, and lest we forget to mention: the vans are (mostly) autonomous. VIAC (or VisLab Intercontinental Autonomous Challenge) is the grand name given to their big demonstration: an 8,000-mile, 3-month tour that will ultimately find them arriving in Shanghai, China, having set off from Milan this Tuesday. You can follow the day-by-day development on the blog below, though we're still being told that practical driverless road cars are a matter of decades, not years, away.

  • Muon the humanoid robot is our ideal best friend

    by Laura June Dziuban
    06.19.2010

    We don't speak German, and machine translation continues to be an intermittent and annoying bundle of failure, so bear with us on this one as we try to cobble together what exactly is going on here. This is Muon, a humanoid robot apparently being developed by Frackenpohl Poulheim at the ALEAR Laboratory of Neuro Robotics at Humboldt University in Berlin. Like other humanoid bots, Muon is about the size of an eight-year-old child so as not to creep out his human companions by being too threatening, and his design, while reminiscent of previous robots we've seen, is pretty original. It's actually hard to tell what stage of development Muon is in -- certainly many of the photos we have spied were concepts -- but we're going to keep our eyes peeled for him moving into the future. If you hit up the source link, you can check out a video of Muon's development. There's one more amazing shot after the break.

  • IBM's Watson is really smart, will try to prove it on Jeopardy! this fall (video)

    by Vlad Savov
    06.17.2010

    As much as we love our Google homepage, computer search remains a pretty rudimentary affair. You punch in keywords and you get only indirect answers in the form of relevant web results. IBM doesn't seem to be too happy with this situation and has been working for the past three years on perfecting its Watson supercomputer: an array of server racks that's been endowed with linguistic algorithms allowing it to not only recognize oddly phrased or implicative questions, but to answer them in kind, with direct and accurate responses. Stuffed with encyclopedic knowledge of the world around it, it answers on the basis of information stored within its data banks, though obviously you won't be able to tap into it any time soon for help with your homework. The latest word is that Watson's lab tests have impressed the producers of Jeopardy! enough to have the bot participate in a televised episode of the show. That could happen as early as this fall, which fits right in line with our scheduled doom at robots' hands by the end of 2012. Ah well, might as well get our popcorn and enjoy the show.

  • The Engadget Show returns this Saturday, April 24th with roboticist Dr. Dennis Hong, Ryan Block, and much more!

    by Joshua Topolsky
    04.23.2010

    Well ladies and gentlemen, it's that time again -- the Engadget Show is back in a big way this Saturday, April 24th at 6pm! This time around, we'll have the world renowned roboticist Dr. Dennis Hong on hand for a stirring discussion on robotics -- as well as the progress on our future robot butlers. What's more, gdgt co-founder and Engadget editor emeritus Ryan Block will be joining the round table and our own investigative reporter Rick Karr will be back with a head-scratching report on the war in the music industry over net neutrality. You can also look forward to some fine, fine music from Neil Voss and mind-numbing visuals from NO CARRIER. We'll be streaming the whole thing direct to you via the internet, but we'll also be doing tons of giveaways at the live show only, so make the trek and join us at The Times Center in person. If you're geographically incapable of joining us in New York City, just hit up the stream and tweet comments directly to the show! If you're wondering about what kind of giveaways we've got in store, one lucky audience member will walk away from the show with this insane ATI Eyefinity rig. Yes. Seriously. Note: The show time has been moved back an hour, so it will be starting at 6PM! See below for more details. The Engadget Show is sponsored by Sprint, and will take place at the Times Center, part of The New York Times Building in the heart of New York City at 41st St. between 7th and 8th Avenues (see map after the break). Tickets are -- as always -- free to anyone who would like to attend, but seating is limited, and tickets will be first come, first served... so get there early! 
Here's all the info you need:

  • There is no admission fee -- tickets are completely free
  • The event is all ages
  • Ticketing will begin at the Times Center at 3:30PM on Saturday, doors will open for seating at 5:30PM, and the show begins at 6PM
  • You cannot collect tickets for friends or family -- anyone who would like to come must be present to get a ticket
  • Seating capacity in the Times Center is about 340, and once we're full, we're full
  • The venue is located at 41st St. between 7th and 8th Avenues in New York City (map after the break)
  • The show length is around an hour

If you're a member of the media who wishes to attend, please contact us at: engadgetshowmedia [at] engadget [dot] com, and we'll try to accommodate you. All other non-media questions can be sent to: engadgetshow [at] engadget [dot] com.

Subscribe to the Show:

  • [iTunes] Subscribe to the Show directly in iTunes (M4V).
  • [Zune] Subscribe to the Show directly in the Zune Marketplace (M4V).
  • [RSS M4V] Add the Engadget Show feed (M4V) to your RSS aggregator and have it delivered automatically.

  • AILA bot can recognize objects' weight and fragility, render shelf stackers obsolete (video)

    by Vlad Savov
    04.22.2010

    Now, this isn't quite the height of innovation, but it's a pretty cool compilation of existing technologies nonetheless. The femme-themed AILA robot has an RFID reader in its left palm, which allows it to obtain non-visual information about the objects put in front of it. Based on that input, as well as data collected from its 3D camera and two laser scanners, AILA can intelligently deal with and transport all sorts of items, without the pesky need for a fleshy human to come along and give it further instructions. The good news is that it's a really slow mover for now, so if you do your cardio you should be able to run away from one in case of any instruction set malfunctions. See it on video after the break.

  • Former Apple Store employee creates Iron Man's J.A.R.V.I.S. using a Mac mini

    by Michael Grothaus
    04.06.2010

    Okay, there's no HUD display like Tony Stark had and it isn't voiced by Paul Bettany, but former Apple Store employee Chad Barraford has created Project Jarvis, a digital assistant that greets him, tweets for him, and can even tell his family when he has a headache and dim the lights of his apartment before he reaches home. Project Jarvis is based on the comic book character Edwin Jarvis, Tony Stark's human butler, who became an AI construct when he was re-envisioned for a twenty-first century audience in the first Iron Man film. Chad's real-life Jarvis may not help him fly an invincible suit of armor, but via RFID tags, webcams, and microphones, Barraford can communicate with Jarvis through tweets, instant messages, and speech recognition. Jarvis can control lights and appliances, announce breaking news, Facebook updates, and Netflix queue changes, check stock quotes and the weather, and even assist with cooking. Barraford calls Jarvis a digital life assistant (DLA) and runs it entirely from a four-year-old Mac mini running custom AppleScript, he told us. Right now he has no plans to sell the AppleScript code, but is always happy to share ideas with other developers of DLAs. Click on over to The Boston Globe to see video of Jarvis in action.
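    The heart of a DLA like this is just a phrase-to-action dispatcher. Here's a toy Python sketch of the pattern (entirely hypothetical -- Barraford's system is custom AppleScript, and none of these handler names come from his code): recognized phrases map to functions that fire off the actual home-automation calls.

```python
# Stand-in handlers; in a real DLA these would trigger lights, feeds, etc.
def dim_lights():
    return "lights dimmed"

def check_weather():
    return "weather report queued"

# Recognized phrase -> handler (a tiny, invented command vocabulary)
COMMANDS = {
    "dim the lights": dim_lights,
    "what's the weather": check_weather,
}

def handle(phrase):
    """Dispatch a recognized phrase; unknown phrases get a polite fallback."""
    action = COMMANDS.get(phrase.lower().strip())
    return action() if action else "Sorry, I didn't catch that."

print(handle("Dim the lights"))  # lights dimmed
```

    Speech recognition, IM, and Twitter then become interchangeable front ends that all funnel text into the same `handle` function.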

  • DARPA sets sights on cameras that understand

    by Sean Hollister
    03.18.2010

    DARPA wants to let you all know that its plans for the robot apocalypse are still going strong. The agency's got IBM working on the brains, has an RFI out on the skin, and is handling propulsion and motor control in-house. Next up? Eyeballs. In order to give its robots the same sort of "visual intelligence" currently limited to animals, DARPA is kicking off a new program called The Mind's Eye with a one-day scientific conference this April. The goal is a "smart camera" that can not only recognize objects but also describe what they're doing and why, allowing unmanned bots and surveillance systems to report back, or -- we're extrapolating here -- make tactical decisions of their own. To be clear, there's no funding or formal proposal requests for this project quite yet. But if the code does come to fruition, DARPA, please: make sure autoexec.bat includes a few Prime Directives.

  • IBM simulates cat's brain, humans are next

    by Vlad Savov
    11.18.2009

    Almost exactly a year ago we noted DARPA pouring nearly $5 million into an IBM project to develop a computer capable of emulating the brain of a living creature. Having already modeled half of a mouse's brain, the researchers were at that time heading toward the more ambitious territory of feline intelligence, and today we can report on how far that cash injection and extra twelve months have gotten us. The first big announcement is that they have indeed succeeded in producing a computer simulation on par, in terms of complexity and scale, with a cat's brain. The second, perhaps more important, is that "jaw-dropping" progress has been made in the sophistication and detail level of human brain mapping. Reverse engineering the brain is hoped to bring about new ways of building computers that mimic natural brain structures, an endeavor collectively termed "cognitive computing." The read link will reveal more, and you can make your own cyborg jokes in the comments below.

  • Robovie rescue bot hunts high and low for lost princesses (video)

    by Vlad Savov
    11.03.2009

    If you've been feeling blue because you haven't got enough green to keep the old bank account in the black, we've got just the tonic for you, dear friend. There's nothing that gets us all perked up and cheerful quite like an adorable humanoid robot negotiating an obstacle course in the performance of a rescue mission. In fact, if you layer on your own "save the princess" narrative atop the on-screen events, the pep in your step should be back in no time. The smile-inducing video can be found after the break.

  • MIT's Affective Intelligent Driving Agent is KITT and Clippy's lovechild (video)

    by Vlad Savov
    10.30.2009

    If we've said it once, we've said it a thousand times: stop trying to make robots into "friendly companions!" MIT must have some hubris stuck in its ears, as its labs are back at it with what looks like Clippy gone 3D, with an extra dash of Knight Rider-inspired personality. What we're talking about here is a dashboard-mounted AI system that collects environmental data, such as local events, traffic and gas stations, and combines it with a careful analysis of your driving habits and style to make helpful suggestions and note points of interest. By careful analysis we mean it snoops on your every move, and by helpful suggestions we mean it probably nags you to death (its own death). Then again, the thing's been designed to communicate with those big Audi eyes, making even our hardened hearts warm just a little. Video after the break.

  • eviGroup's Pad is a 10-inch 3G tablet with personality

    by Vlad Savov
    10.26.2009

    Time to freshen up the old netbook market with a dash of Windows 7, a pinch of touchscreen functionality, and a generous helping of... Seline10? eviGroup, the crew responsible for the attractive 5-inch Wallet MID, has announced the 10.2-inch Pad, whose pièce de résistance is the Seline10 artificial intelligence software that's been in development for a decade, if you can believe it. Its purpose is to act as your secretary / assistant, and while the novelty's good, we all know how well Clippy worked out. Fret not though, it's just an optional extra and shouldn't detract from the appeal of a device that offers 3G and a/b/g WiFi connectivity, one VGA and three USB ports, multicard reader, webcam, microphone, and the old faithful 1.6GHz of Atom power. A price of under €500 is being touted, with further details set to emerge over the coming days.

  • Movie Gadget Friday: Weird Science

    by Ariel Waldman
    08.28.2009

    Ariel Waldman contributes Movie Gadget Friday, where she highlights the lovable and lame gadgets from the world of cinema. We last left off on the cyberpunk streets of LA in Strange Days. This week, in honor of the loss of the man behind so many 1980s icons, Movie Gadget Friday is paying homage to filmmaker John Hughes with a look into the 1985 cult classic Weird Science. Tapping into the geek-fiction fantasies of most tinkering teenagers, real-life gadget specs are stretched to surreal capabilities to create the ultimate female bombshell. Unsurprisingly, the character's name, Lisa, was inspired by the Apple Lisa, Apple's first GUI computer.

  • Are memristors the future of Artificial Intelligence? DARPA thinks so

    by Joseph L. Flatley
    07.14.2009

    New Scientist has recently published an article that discusses the memristor, the long-theorized basic circuit element that relates current and voltage (like a resistor), but in a more complex, dynamic manner -- with the ability to "remember" previous currents. As we've seen, HP has already made progress developing hybrid memristor-transistor chips, but now the hubbub is the technology's applications for artificial intelligence. Apparently, synapses have complex electrical responses "maddeningly similar" to those of memristors, a realization that led Leon Chua (who first postulated the memristor in 1971) to say that synapses are memristors -- "the missing circuit element" he had been looking for was with us all along, it seems. And of course, it didn't take long for DARPA to jump into the fray, with our fave DoD outfit recently announcing its Systems of Neuromorphic Adaptive Plastic Scalable Electronics Program (SyNAPSE -- cute, huh?) with the goal of developing "biological neural systems" that can "autonomously process information in complex environments by automatically learning relevant and probabilistically stable features and associations." In other words, they see this as a way to make their killer robots a helluva lot smarter -- and you know what that means, don't you?

    Read - New Scientist: "Memristor minds: The future of artificial intelligence"
    Read - DARPA: "Systems of Neuromorphic Adaptive Plastic Scalable Electronics"
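    To make "remembering previous currents" concrete, here's a toy Python simulation of an idealized memristor, loosely in the spirit of the linear-dopant model often used to describe HP's device (all parameter values are invented for illustration). The device obeys an Ohm-like law v = M(q) * i, but M depends on the charge q that has flowed through it, so its resistance drifts with its history.

```python
import math

def simulate(steps=1000, dt=1e-3):
    """Euler-integrate an idealized memristor driven by a sine current."""
    M_on, M_off = 100.0, 16000.0   # bounding resistances (ohms), illustrative
    q_max = 1e-2                   # charge that fully switches the device
    q, history = 0.0, []
    for n in range(steps):
        t = n * dt
        i = 1e-3 * math.sin(2 * math.pi * 2 * t)   # 2 Hz, 1 mA drive current
        x = min(max(q / q_max, 0.0), 1.0)          # normalized state in [0, 1]
        M = M_off + (M_on - M_off) * x             # memristance depends on q
        v = M * i                                  # Ohm-like law, with memory
        history.append((t, i, v, M))
        q += i * dt                                # dq/dt = i: accumulate history
    return history

hist = simulate()
# Partway through the positive half-cycle, the resistance has drifted
# from its starting value -- the element "remembered" the current it carried.
print(hist[0][3], hist[125][3])
```

    Plotting v against i for a drive like this gives the pinched hysteresis loop that is the memristor's signature, and it's exactly this state-dependent conductance that makes the synapse analogy tempting.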

  • Researchers develop a robot that reads your intentions, says you're 'thick'

    by Joseph L. Flatley
    06.06.2009

    Robots won't be able to wrest control of the planet from us silly humans until they learn how to collaborate. Sure, they can mow the lawn or mix a drink, but only when you give 'em explicit instructions. Luckily for our future robot overlords, the EU's JAST project is studying the ways that humans work together, in the hope that it can someday teach robots to anticipate the actions and intentions of a human partner. "In our experiments the robot is not observing to learn a task," explains Wolfram Erlhagen from the University of Minho. "The JAST robots already know the task, but they observe behavior, map it against the task, and quickly learn to anticipate [partner actions] or spot errors when the partner does not follow the correct or expected procedure." This bad boy has a neural architecture that mimics what happens when two people interact, and the video below shows the rather melancholy automaton trying to convince its human partner to pick up the right pieces to complete a simple task. Watch it in action after the break.

  • IBM's Watson to rival humans in round of Jeopardy!

    by Darren Murph
    04.27.2009

    IBM's already proven that a computer from its labs can take on the world's best at chess, but what'll happen when the boundaries of a square-filled board are removed? Researchers at the outfit are obviously excited to find out, today revealing that its Watson system will be pitted against brilliant Earthlings on Jeopardy! in an attempt to further artificial intelligence when it comes to semantics and searching for indexed information. Essentially, the machine will have to be remarkably nimble in order to understand "analogies, puns, double entendres and relationships like size and location," something that robotic linguists have long struggled with. There's no mention of a solid date when it comes to the competition itself, but you can bet we'll be setting our DVRs whenever it's announced. Check out a video of the progress after the break.

    [Via The New York Times]

  • Artificial Intelligence solves boring science experiments, makes interns obsolete

    by Joseph L. Flatley
    04.03.2009

    Researchers at Aberystwyth University in Wales have developed a robot that is being heralded as the first machine to have discovered new scientific knowledge independently of a human operator. Named Adam, the device has already identified the role of several genes in yeast cells, and has the ability to plan further experiments to test its own hypotheses. Ross King, from the university's computer science department, remarked that the robot is meant to take care of the tedious aspects of the scientific method, freeing up human scientists for "more advanced experiments." Across the pond at Cornell, researchers have developed a computer that can find established laws in the natural world -- without any prior scientific knowledge. According to PhysOrg, they've tested the AI on "simple mechanical systems" and plan on applying it to more complex problems in areas from biology to cosmology, where there are mountains of data to be pored through. It sure is nice to hear about robots doing something helpful for a change.

    [Thanks, bo3of]

    Read: Robo-scientist's first findings
    Read: Being Isaac Newton: Computer derives natural laws from raw data
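    At its simplest, "deriving a law from raw data" is a model-selection problem. The Python sketch below is a deliberately tiny caricature of that idea (not the Cornell team's actual algorithm, and the candidate pool and data are invented): generate observations from F = m * a, then score a handful of candidate formulas and keep the one that best explains the data.

```python
# Noiseless "observations" generated from Newton's second law, F = m * a
data = [(m, a, m * a) for m in (1.0, 2.0, 3.0) for a in (0.5, 1.5, 2.5)]

# A small pool of candidate laws; real systems search a vast formula space
candidates = {
    "F = m + a": lambda m, a: m + a,
    "F = m * a": lambda m, a: m * a,
    "F = m - a": lambda m, a: m - a,
    "F = m / a": lambda m, a: m / a,
}

def score(law):
    """Sum of squared errors between a candidate law and the observations."""
    return sum((law(m, a) - F) ** 2 for m, a, F in data)

# Select the candidate with the lowest error on the data
best = min(candidates, key=lambda name: score(candidates[name]))
print(best)  # F = m * a
```

    The real systems differ in scale, not kind: they search an enormous space of symbolic expressions (and, in Adam's case, choose which experiment to run next), but the loop of propose, score against data, and refine is the same.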