turingtest

Latest

  • Bulkhead Interactive

    Try passing 'The Turing Test' August 30th on Xbox One

    by Timothy J. Seppala
    07.22.2016

    Given video gaming's reliance on artificial intelligence and penchant for sci-fi themes, it's surprising that only now is there a game named after Alan Turing's famous A.I. test. Here we are, though, with The Turing Test hitting Xbox One on August 30th; it's also been promised for Steam next month. A post on Xbox Wire makes the game sound an awful lot like Portal, to be honest. It's a first-person puzzler set in a sterile research facility on Jupiter's moon Europa, wherein you'll use a gun of sorts to control A.I.-powered machines and "solve puzzles that only a human could solve." That's in addition to other tasks designed to bend your brain.

  • Machines can generate sound effects that fool humans

    by Steve Dent
    06.13.2016

    Can machines come up with plausible sound effects for video? Recently, MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) created a sort of Turing test that fooled folks into thinking that machine-created letters were written by humans. Using the same principle, the researchers created algorithms that act just like Hollywood "Foley artists," adding sound to silent video. In a psychological test, the system fooled subjects into believing that the computer-generated banging, scratching and rustling was recorded live.

  • Illustration by D. Thomas Magee

    The curious sext lives of chatbots

    by Christopher Trout
    03.02.2016

    ELIZA is old enough to be my mother, but that didn't stop me from trying to have sex with her. NSFW Warning: This story may contain links to and descriptions or images of explicit sexual acts.

  • 'Ex Machina' shows Turing isn't enough to test AI

    by Devindra Hardawar
    04.10.2015

    With Ex Machina, the directorial debut of 28 Days Later and Sunshine writer Alex Garland, we can finally put the Turing test to rest. You've likely heard of it -- developed by legendary computer scientist Alan Turing (recently featured in The Imitation Game), it's a test meant to prove artificial intelligence in machines. But, given just how easy it is to trick, as well as the existence of more rigorous alternatives for probing consciousness, passing a test developed in the '50s isn't much of a feat to AI researchers today. Ex Machina isn't the first film to expose the limits of the Turing test, but it's one of the most successful. And, like 2001 and Primer, it's a work of science fiction that might end up giving you a case of philosophical whiplash.

  • Revamped Turing test expects computers to show imagination

    by Mariella Moon
    11.20.2014

    In June, the developers of a Russian chatbot posing as a 13-year-old boy from Ukraine claimed it had passed the Turing test. While a lot of people doubt the result's validity, since the testers used a sketchy methodology and the event was organized by a man fond of making wild claims, it's clear we need a better way to determine whether an AI possesses human levels of intelligence. Enter Lovelace 2.0, a test proposed by Georgia Tech associate professor Mark Riedl. Here's how Lovelace 2.0 works: the artificial agent passes if it develops a creative artifact from a subset of artistic genres deemed to require human-level intelligence, and the artifact meets creative constraints given by a human evaluator. Further, the human evaluator must determine that the object is a valid representative of the creative subset and that it meets the criteria; the artifact needs only to meet these criteria and does not need to have any aesthetic value. Finally, a human referee must determine that the combination of subset and criteria is not an impossible standard.
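Riedl's procedure reads like a three-party protocol, and its pass/fail logic can be sketched as a short evaluation loop. Everything below is hypothetical scaffolding -- the object names and methods are illustrative, not from Riedl's paper:

```python
def passes_lovelace_2(agent, evaluator, referee, genre, constraints):
    """Sketch of the Lovelace 2.0 protocol as summarized above.

    The agent passes if it produces an artifact in the requested
    creative genre that satisfies the evaluator's constraints, and
    a referee confirms the genre/constraint combination is fair.
    """
    # The referee first rules out impossible challenges.
    if not referee.is_feasible(genre, constraints):
        return None  # invalid round; a new challenge must be chosen

    artifact = agent.create(genre, constraints)

    # The human evaluator judges genre membership and constraint
    # satisfaction; aesthetic value is deliberately NOT required.
    return (evaluator.is_valid_member(artifact, genre)
            and evaluator.meets_constraints(artifact, constraints))
```

Note the shape of the test: unlike a Turing-style chat, there is no deception step, only verifiable creative output judged against explicit criteria.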

  • Supercomputer passes the Turing test by mimicking a teenager (update: reasons to be cautious)

    by Jon Fingas
    06.08.2014

    After 64 long years, it looks like a machine has finally passed the Turing test for artificial intelligence. A supercomputer in a chat-based challenge fooled 33 percent of judges into thinking that it was Eugene Goostman, a fictional 13-year-old boy; that's just above the Turing test's commonly accepted 30 percent threshold. Developers Vladimir Veselov and Eugene Demchenko say that the key ingredients were both a plausible personality (a teen who thinks he knows more than he does) and a dialog system adept at handling more than direct questions.

  • Bots 'out-human' humans in competition

    by MJ Guthrie
    09.27.2012

    Who's a bot and who is not? Careful with your guess; you might just have it backwards! Alan Turing, a mathematician and computer scientist born 100 years ago, argued that the best measure of sentience in a machine is whether it can fool us into believing it is human. And in a recent gaming tournament pitting AI bots against humans, two artificially created virtual gamers did just that. Set in Unreal Tournament 2004, the competition gave players "judging guns" to tag which competitors they thought were human and which they thought were bots. Two bots created by scientists convinced the judges that they were more human than half of the human competitors. In fact, both bots mimicked human behaviors well enough to receive a humanness rating of 52%, whereas the human players in the tournament averaged only 40%. [Thanks to Matt for the tip!]

  • Google's Turing doodle celebrates his genius, reminds us how dumb we are (video)

    by James Trew
    06.23.2012

    This week sees many corners of the globe celebrating the 100th anniversary of the birth of Alan Turing. A man whose contribution to the worlds of tech and gadgets is immeasurable -- a sentiment not lost on Google. Today, geeks and norms worldwide will be waking up to possibly the most complex doodle to date. Can you set the machine and spell out "Google"? If you can, you'll be sent off to lots more information about the man himself. This isn't the only thing Mountain View's done to keep his legacy alive, having previously helped Bletchley Park raise funds to purchase (and display) Turing's papers, and more recently helping curators at London's Science Museum with its Codebreaker - Alan Turing's Life and Legacy exhibition. If you haven't already, head to Google.com and pop your logic hat on, and if you get stuck, head past the break for a helpful video.

  • Remembering Alan Turing at 100

    by Brian Heater
    06.22.2012

    Alan Turing would have turned 100 this week, an event that would have, no doubt, been greeted with all manner of pomp -- the centennial of a man whose mid-century concepts would set the stage for modern computing. Turing, of course, never made it that far, found dead at age 41 from cyanide poisoning, possibly self-inflicted. His story is that of a brilliant mind cut down in its prime for sad and ultimately baffling reasons, a man who accomplished so much in a short time and almost certainly would have had far more to give, if not for a society that couldn't accept him for who he was. The London-born computing pioneer's name is probably most immediately recognized in the form of the Turing Machine, the "automatic machine" he discussed in a 1936 paper and formally extrapolated over the years. The concept would help lay the foundation for future computer science, arguing that a simple machine, given enough tape (or, perhaps more appropriately in the modern sense, storage) could be used to solve complex equations. All that was needed, as Turing laid it out, was a writing method, a way of manipulating what's written and a really long ream to write on. In order to increase the complexity, only the storage, not the machine, needs upgrading.
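That "writing method, way of manipulating what's written and a really long ream" fits in a few lines of Python. This is an illustrative simulator, not anything from Turing's 1936 paper; the rule table shown implements a toy unary incrementer:

```python
def run_turing_machine(tape, rules, state="start", halt="halt", blank="_"):
    """Minimal Turing machine: a tape, a head, and a rule table.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (step left) or +1 (step right).
    """
    tape = dict(enumerate(tape))  # sparse tape: grows as the head writes
    head = 0
    while state != halt:
        symbol = tape.get(head, blank)       # blank beyond the written region
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

# A unary incrementer: scan right over the 1s, then append one more 1.
rules = {
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("1", +1, "halt"),
}
```

The paragraph's closing point survives in code: to make the machine more capable you add rows to the `rules` table and cells to the tape; the simulator itself never changes.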

  • IBM celebrates the 15th anniversary of Deep Blue beating Garry Kasparov (video)

    by Daniel Cooper
    05.11.2012

    It's been 15 years since IBM's Deep Blue recorded its famous May 11th, 1997 victory over world champion chess player Garry Kasparov -- a landmark in artificial intelligence. Designed by Big Blue as a way of understanding high-power parallel processing, the "brute force" system could examine 200 million chess positions every second, beating the grandmaster 3.5-2.5 after losing 4-2 the previous year. It went on to help develop drug treatments, analyze risk and aid data miners before being replaced with Blue Gene and, more recently, Watson -- which recorded a famous series of victories on Jeopardy! in 2011. If you'd like to know more, we've got a video with one of the computer's fathers, Dr. Murray Campbell, and a comparison of how the three supercomputers stack up after the break. As for Garry Kasparov? The loss didn't ruin his career: he went on to win every chess trophy conceivable, retired, wrote some books and went into politics. As you do.

  • Google reCAPTCHAs now featuring Street View addresses, 221b Baker St. to get even more famous

    by Tim Stevens
    03.30.2012

    If you've enjoyed decrypting the often frustratingly skewed (and occasionally humorously juxtaposed) reCAPTCHAs, you might be a bit sad to learn that Google is mixing things up with some rather more boring numerals. The combinations of two words are typically used as part of a registration form to ensure the registrant is, indeed, human. Google is now replacing one of the words in some of its reCAPTCHA forms with photos gleaned from its Street View service. Google says it uses these numbers internally to improve the accuracy of Street View, and that pulling them into reCAPTCHAs is part of an "experiment" to "determine if using imagery might also be an effective way to further refine our tools for fighting machine and bot-related abuse online." In other words, Google's bots are already capable of decoding these numbers, which makes this all sound like a bit of a challenge to the rest of the OCR-loving coders in the world. Any takers?

    [Image Credit: dirtbag]

  • A closer look at Elbot's Turing test conversation

    by Darren Murph
    10.19.2008

    Earlier this week, Elbot made a fairly impressive showing (comparatively speaking, at least) when fooling three judges into thinking it was human; had it fooled one more judge on the dozen-deep panel, it would have successfully passed the famed Turing test. Auntie Beeb now has a report on what exactly Elbot said when asked a litany of questions away from the competition, and there's also a video with the related experts dissecting its performance. To be totally honest, its responses weren't too far from being completely passable as ones from a tired, potentially inebriated Earthling (in our humble opinion), but we'll leave the final determination to you. Touch the read link for a one-on-one with ones and zeros.

  • New round of Turing test fails to crown a winner

    by Donald Melanson
    10.13.2008

    While some folks are considering taking the Turing test one step further and applying it to military robots, a group of researchers in the UK led by none other than would-be cyborg Kevin Warwick are doing their best to keep things as Turing intended and simply trying to fool some humans into thinking that the robot they're talking to is actually a person. Fortunately for us on the human side of the equation, they weren't quite successful, though one "robot" known as Elbot did get relatively close to the goal, fooling 25% of its human interrogators, which is just 5% off the mark set by Alan Turing. Each of the four other "artificial conversational entities" also managed to fool at least one of its questioners, though they eventually showed their true colors with random answers like "soup" when pressed as to what their job was.

  • Military Turing test to make autonomous war robots legal?

    by Darren Murph
    02.29.2008

    Not that we're experts on the matter or anything, but if barrister and engineer Chris Elliot knows a thing or two about legal issues, a kind of "military Turing test" could be the key to legally deploying autonomous systems in battle. As it stands, "weapons intrinsically incapable of distinguishing between civilian and military targets are illegal" -- at least according to Mr. Elliot -- but by testing an intelligent war machine's ability to home in on legitimate targets and brush off friendlies, all that could change. Of course, actually administering the test still remains a mystery, but considering that remotely controlled armed bots are currently being used in Iraq, we reckon someone's already figuring out a solution to said dilemma.