RobotApocalypse

Latest

  • Pyuuun palm-sized robot keeps tabs on you, delivers beverages

by Joseph L. Flatley
    03.02.2009

If Hans Moravec of the Robotics Institute at Carnegie Mellon University is right, we only have a good twenty to thirty years left before robots evolve into a new type of artificial species. As we wait for the inevitable robot apocalypse, we've already begun to see lots of little robotic guys pop into our lives, whether they're sweeping the floor, giving us something to hug, or bringing us a cup of tea. In addition to its miniature waitstaff ability, Pyuuun, Robo-Engine's "LifeLog Robot," is equipped with eight sensors (including brightness, movement, collision, sound, distance, temperature, slope and infrared) and can be programmed to monitor an area, collecting various data (such as keeping an eye on a temperature-sensitive workspace) and reporting back to you (or your robot overlords) via WiFi. With a 12-volt battery that promises six hours of use on a single charge, the utility of this bad boy is only limited by your imagination -- and its ¥300,000 (about $3,090) price tag. Video after the break.

  • Navy report warns of robot uprising, suggests a strong moral compass

by Joseph L. Flatley
    02.18.2009

You know, when armchair futurists (and jive talkin' bloggists) make note of some of the scary new tech making the rounds in defense circles these days it's one thing, but when the Doomsday Scenarios come from official channels, that's when we start to get nervous. According to a report published by California State Polytechnic University (with data made available by the U.S. Navy's Office of Naval Research), the sheer scope of the military's various AI projects is so vast that it is impossible for anyone to fully understand exactly what's going on. "With hundreds of programmers working on millions of lines of code for a single war robot," says Patrick Lin, the chief compiler of the report, "no one has a clear understanding of what's going on, at a small scale, across the entire code base." And what we don't understand can eventually hunt us down and kill us. This isn't idle talk, either -- a software malfunction just last year caused U.S. Army robots to aim at friendly targets (fortunately, no shots were fired). The solution, Dr. Lin continues, is to teach robots "battlefield ethics... a warrior code." Of course, the government has had absolutely no problems with ethics over the years -- so programming its killer robots with some rudimentary values should prove relatively simple.