uncanny-valley
NVIDIA helps bring more lifelike avatars to chatbots and games
NVIDIA has unveiled tools that could see realistic AI avatars come to more apps and games.
NVIDIA created a toy replica of its CEO to demo its new AI avatars
NVIDIA has advanced its AI voice and avatar technology in uncanny ways. You could talk to assistants that look like real people — or toys.
Disney robot with human-like gaze is equal parts uncanny and horrifying
Disney researchers have built a robot with a highly realistic gaze -- and a horrifying look that could haunt your dreams.
Samsung sheds light on its 'artificial human' project
Samsung has been drumming up hype for its Neon 'artificial human' project, and it's now clearer what the initiative entails. Project lead Pranav Mistry has posted a teaser effectively confirming that Neon is nothing less than an effort to create lifelike avatars. The digital beings are based on "captured data," but can generate their own expressions, movements and sayings in multiple languages. While the static image doesn't reveal much more than that, some recent discoveries help fill in the gaps.
AI avatars of Chinese authors could soon narrate audiobooks
The Chinese search engine Sogou isn't stopping at AI news anchors. The company has created "lifelike" avatars of two Chinese authors, and it plans to have them narrate audiobooks in video recordings. According to the BBC, Sogou used AI, text-to-speech technology and video clips from the China Online Literature+ conference to create avatars of authors Yue Guan and Bu Xin Tian Shang Diao Xian Bing.
Why putting googly eyes on robots makes them inherently less threatening
At the start of 2019, supermarket chain Giant Food Stores announced it would begin operating customer-assisting robots -- collectively dubbed Marty -- in 172 East Coast locations. These autonomous machines may navigate their respective stores using a laser-based detection system, but they're also outfitted with a pair of oversize googly eyes. This is to "[make] it a bit more fun," Giant President Nick Bertram told Adweek in January, and to "celebrate the fact that there's a robot."
Sony takes SOEmote live for EverQuest II, lets gamers show their true CG selves (video)
We had a fun time trying Sony's SOEmote expression capture tech at E3; now everyone can try it. As of today, most EverQuest II players with a webcam can map their facial behavior to their virtual personas while they play, whether it's to catch the nuances of conversation or drive home an exaggerated game face. Voice masking also lets RPG fans stay as much in (or out of) character as they'd like. About the only question left for those willing to brave the uncanny valley is when other games will get the SOEmote treatment. Catch our video look after the break if you need a refresher.
Samsung files patents for robot that mimics human walking and breathing, ratchets up the creepy factor
As much as Samsung is big on robots, it hasn't gone all-out on the idea until a just-published quartet of patent applications. The filings have a robot more directly mimicking a human walk and adjusting the scale to get the appropriate speed without the unnatural, perpetually bent gait of certain peers. To safely get from point A to point B, any path is chopped up into a series of walking motions, and the robot constantly checks against its center of gravity to stay upright as it walks uphill or down. All very clever, but we'd say Samsung is almost too fond of the uncanny valley: one patent has rotating joints coordinate to simulate the chest heaves of human breathing. We don't know if the company will ever put the patents to use; these could be just feverish dreams of one-upping Honda's ASIMO at its own game. But if it does, we could be looking at Samsung-made androids designed like humans rather than for them.
MIT unveils computer chip that thinks like the human brain, Skynet just around the corner
It may be a bit on the Uncanny Valley side of things to have a computer chip that can mimic the human brain's activity, but it's still undeniably cool. Over at MIT, researchers have unveiled a chip that mimics how the brain's neurons adapt to new information (a process known as plasticity), which could help in understanding assorted brain functions, including learning and memory. The silicon chip contains about 400 transistors and can simulate the activity of a single brain synapse -- the space between two neurons that allows information to flow from one to the other. Researchers anticipate this chip will help neuroscientists learn much more about how the brain works, and it could also be used in neural prosthetic devices such as artificial retinas. Moving into the realm of "super cool things we could do with the chip," MIT's researchers have outlined plans to model specific neural functions, such as the visual processing system. Such systems could be much faster than digital computers: where it might take hours or days to simulate a simple brain circuit digitally, the chip -- which operates on analog principles -- could run even faster than the biological system itself. In other news, the chip will gladly handle next week's grocery run, since it knows which foods are better for you than you ever could.
Microsoft shows off prototype avatar that will haunt your dreams
Microsoft's chief research and strategy officer Craig Mundie wants to show you the haunting bridge his team has built over the uncanny valley. Employing Kinect hardware and custom PC software, the research team at Microsoft has created an unnervingly realistic new avatar that can handle text-to-speech when combined with a script, and can recognize the words in any order. "This is a way to create a synthetic model of people that will be acceptable to them when they would look at them on a television or in an Avatar Kinect kind of scenario," Mundie told USA Today in a video interview. "There's no reason that we couldn't do that in real time by feeding the information that we get from a Kinect sensor, including its audio input and its 3D modeling, spatial representation, and couple that to the body and the gesture recognition in order to create a full body avatar, that has photo realistic features and full facial animation," he added. This impressive (if not somewhat terrifying) demo is still very much in the prototype phase, however, and Mundie said it would be "some time before we see it show up in products." We're just hoping those first "products" aren't T-1000s.
The Daily Grind: How realistic do you like your avatars?
From the highly detailed characters in EVE Online to the beautifully impressionistic avatars in LOVE, there's a wide variety of avatar types available in MMO games. Whether you use your avatar purely for humorous results as the above EVE Online pilot did, attempt to create a character that looks at least somewhat like you, or are out to create a completely foreign fantasy being to role-play, choice abounds these days. Character creators range from automatically generated with no choice at all to insanely complex and detailed -- and everywhere in between. With the sheer number of options out there, this morning we thought we'd ask: which do you prefer? Do you like your avatar so realistic that it's almost bordering on the uncanny valley? Perhaps you prefer more middle-of-the-road options like Guild Wars or other games in that general neighborhood -- not too realistic, not too cartoony? Or do you prefer to go as far into your imagination as the character creator will let you, with avatars such as the ones in LOVE or World of Warcraft's stylized, non-human offerings? Every morning, the Massively bloggers probe the minds of their readers with deep, thought-provoking questions about that most serious of topics: massively online gaming. We crave your opinions, so grab your caffeinated beverage of choice and chime in on today's Daily Grind!
Cambridge developing 'mind reading' computer interface with the countenance of Charles Babbage (video)
For years now, researchers have been exploring ways to create devices that understand the nonverbal cues that we take for granted in human-human interaction. One of the more interesting projects we've seen of late is led by Professor Peter Robinson at the Computer Laboratory at the University of Cambridge, who is working on what he calls "mind-reading machines," which can infer mental states of people from their body language. By analyzing faces, gestures, and tone of voice, it is hoped that machines could be made to be more helpful (hell, we'd settle for "less frustrating"). Peep the video after the break to see Robinson using a traditional (and annoying) satnav device, versus one that features both the Cambridge "mind-reading" interface and a humanoid head modeled on that of Charles Babbage. "The way that Charles and I can communicate," Robinson says, "shows us the future of how people will interact with machines." Next stop: uncanny valley!
President Obama takes a minute to chat with our future robot overlords (video)
President Obama recently took some time out of the APEC Summit in Yokohama to meet with a few of Japan's finest automatons, and as always he was one cool cat. Our man didn't even blink when confronted with this happy-go-lucky HRP-4C fashion robot, was somewhat charmed by the Paro robotic seal, and more than eager to take a seat in one of Yamaha's personal transport robots. But who wouldn't be, right? See him in action after the break.
Flobi robot head realistic enough to convey emotions, not realistic enough to give children nightmares (hopefully)
We've seen our fair share of robots meant to convey emotions, and they somehow never fail to creep us out on some level. At least Flobi, the handiwork of engineers at Bielefeld University in Germany, eschews "realism" for cartoon cuteness. But don't let it fool you, this is a complicated device: about the size of a human head, it features a number of actuators, microphones, gyroscopes and cameras, and has the ability to exhibit a wide range of facial expressions by moving its eyes, eyebrows and mouth. The thing can even blush via its cheek-mounted LEDs, and it can take on the appearance of either a male or a female with swappable hair and facial features. And the cartoonish quality of the visage is deliberate. According to a paper submitted by the group to the ICRA 2010 conference, the head is "far enough from realistic not to trigger unwanted reactions, but close enough that we can take advantage of familiarity with human faces." Works for us! Video after the break. [Thanks, Simon]
Video: HRP-4C 'fashion robot' is getting married, won't shut up about it
Ok, ok, "she" isn't really getting married, that would be illegal outside of Massachusetts. Besides, HRP-4C is already hitched, apparently, to her creator Kazuhito Yokoi, who appeared at the Osaka fashion show dressed in a tuxedo. Looking wobbly, perplexed, and creepy as hell, HRP-4C bravely slipped on a helmet of taffeta and lace in what's being called her first professional runway appearance. The crowd seemed to enjoy it until HRP-4C turned on them with her green lasers. Really, see for yourself in the unsettling video embedded after the break. [Via Crave]
Philosony: Who let the - uh - simian out?
Pet simulators have come a long way since our English teachers were giving us detention for trying to feed our pathetic, whimpering, beeping Tamagotchis in class. Now we've got simulated dogs for our handhelds and virtual animals to keep our virtual people company on our PCs. With the development of better robotics, we've even seen geek's best friend jump through the LCD and follow us into the tangible world. By this time next year (hopefully!) we should have a new kind of digital cuteness to keep us amused when no one is watching -- the EyePet. I recently wrote about some of the difficulties, beyond realistic rendering, that developers face when trying to make us emotionally attached to a character. Human behaviors and emotions are so much more difficult to mimic than those of animals, no matter how abstract. You'd find me silently weeping for the destruction of little Metal Gear Mk. II long before I'd be shedding tears for Solid Snake. Why is it easier to evoke a nurturing and protective instinct in a virtual pet than in a virtual human?
New modeling technology breathes life into animation
Ask any animation modeler about the "uncanny valley," and you're sure to get at least a grimace, if not a groan. The term describes the long-standing perception that "animation looks less realistic as it approaches human likeness." Image Metrics is hoping that a newfangled approach used to create Emily (pictured) will finally allow animations to look more like humans and less like "corpses." As you could probably surmise, the secret is the tech's ability to survey and replicate the most subtle of movements, though even Raja Koduri, chief technology officer in graphics at AMD, doesn't see the line between reality and fiction being blurred before 2020. We'll see what Emily's posse has to say about that. [Thanks, Przemek]
Warning: This dog bites!
Kylie Prymus is the first columnist for PS Fanboy. A Ph.D. candidate in philosophy, Kylie specializes in the sociology of technology. Through this new weekly column, Kylie will explore the impact of PlayStation on thought and culture. I'm talking about this dog. Not just any dog. The Big Dog. It may not have teeth (though I'm sure those servo-motors could put a hurtin' on), but when I was shown this video earlier in the week I felt sure it had taken a few nips at my soul. Cut the dog down to two legs and increase its size tenfold and you've got a nearly perfect real-life version of the Geckos from MGS4. While I've mentioned MGS4 to a greater or lesser degree in previous columns, thus far I've avoided tackling anything in the game head on. This is largely because, as readers of my last post are aware, my PS3 is several states away and I haven't been able to play the game through to its conclusion. Don't worry, I'll pick up Snake's saga in a couple of weeks (he's at the front of the line just ahead of Niko and Zack), but I should be able to make a few observations about the game given what I have played (up to the middle of Act 3). If you haven't yet done so, I suggest you hit the first link above and check out the video of Big Dog.
Real Bowser stalks our nightmares
Back in March, this realistic depiction of Mario made us hide beneath our beds, though that was nothing compared with the horrors of what followed. Now, some twisted internet japester has reinvented Bowser in a similar fashion. Gone is the happy-go-lucky Bowser, the ultimately lovable, pantomime-esque villain with an addiction to kidnapping royalty. And in his place? Teeth. Scales. Leathery flesh. Claws that could rip through a plumber's torso like a warm knife through butter. Please, won't somebody think of the children... us? Creep apprehensively past the break for the full image.
Video: Japan's oldest robot reanimated -- writes poetry, hits on your girlfriend
Japan's oldest "modern" robot -- the 10-foot, 6-inch GakuTenSoku -- has been awakened in Japan. Gone are the inflatable rubber tubes of the original 1928 android built by biologist Makoto Nishimura. The bot now tilts its head, moves its eyes, smiles, and puffs out its cheeks thanks to a $200,000, computer-controlled, pneumatic-servo makeover. While nothing compared to its modern offspring, GakuTenSoku still manages to creep us the hell out. On display at the renovated Osaka Science Museum starting July 18th. Video after the break. [Via Impress]