neuralnetwork

Latest

  • Chip promises brain-like AI in your mobile devices

    by Jon Fingas, 02.07.2016

    There's one big, glaring reason why you don't see neural networks in mobile devices right now: power. Many of these brain-like artificial intelligence systems depend on large, many-core graphics processors to work, which just isn't practical for a device meant for your hand or wrist. MIT has a solution in hand, though. It recently revealed Eyeriss, a chip that promises neural networks in very low-power devices. Although it has 168 cores, it consumes 10 times less power than the graphics processors you find in phones -- you could stuff one into a phone without worrying that it will kill your battery.

  • Microsoft is reportedly buying SwiftKey (update: official)

    by Jon Fingas, 02.02.2016

    Microsoft has made a habit out of acquiring significant mobile app makers, but its next deal may cut particularly close to the bone for smartphone fans. The Financial Times' sources understand that Microsoft is on the cusp of acquiring software keyboard maker SwiftKey for $250 million. The exact intent isn't clear -- Microsoft isn't commenting. However, a more advanced input method may be only part of why it's interested.

  • Google's latest partnership could make smartphones smarter

    by Aaron Souppouris, 01.28.2016

    Google has signed a deal with Movidius to include its Myriad 2 MA2450 processor in future devices. The search giant first worked with Movidius back in 2014 for its Project Tango devices, and it's now licensing the company's latest tech to "accelerate the adoption of deep learning within mobile devices."

  • Scientists make a 'true' neural network using brain-like chips

    by Jon Fingas, 01.26.2016

    Many people have built brain-like neural networks that can learn on their own, but they're typically using plain old silicon to do it. Wouldn't it be better if the chips themselves were brain-like? A team of Italian and Russian researchers might help change that. They've created a neural network based on plastic memristors, or resistors that remember their previous electrical resistance. Since they effectively work like brain synapses, they're ideal for creating "true" neural networks where signal transfers create long-lasting effects. And importantly, the choice of technology and materials allows them to be very small (as tiny as 10 nanometers, in theory) without resorting to exotic substances -- you could design a neural network as compact as a regular chip without reinventing the wheel.
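The defining memristor behavior described above -- conductance that depends on the voltage history driven through the device -- can be caricatured in a few lines. This toy model (all names and constants are made up for illustration) is a sketch of the idea, not the researchers' device physics:

```python
import numpy as np

# Toy memristive synapse: an internal state in [0, 1] integrates the
# applied voltage history, and conductance tracks that state. This is
# a caricature of memristor behavior, not real device physics.
class MemristorSynapse:
    def __init__(self, g_min=1e-4, g_max=1e-2, rate=0.1):
        self.state = 0.5          # normalized internal state
        self.g_min, self.g_max, self.rate = g_min, g_max, rate

    @property
    def conductance(self):
        return self.g_min + self.state * (self.g_max - self.g_min)

    def apply_pulse(self, voltage):
        """Positive pulses potentiate, negative pulses depress --
        the device 'remembers' what was driven through it."""
        self.state = float(np.clip(self.state + self.rate * voltage, 0.0, 1.0))
        return self.conductance

syn = MemristorSynapse()
before = syn.conductance
for _ in range(3):
    syn.apply_pulse(+1.0)         # repeated potentiation
print(syn.conductance > before)   # True: conductance has grown
```

This persistence is exactly what makes the device synapse-like: the "weight" survives between signals without any external memory.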

  • An AI algorithm can draw letters as well as a human

    by Steve Dent, 12.11.2015

    Researchers claim to have made a breakthrough in artificial intelligence by giving machines cognitive powers similar to humans. The team from MIT, York University and the University of Toronto first trained an algorithm to draw characters in 50 languages by studying the required pen strokes. Once completed, it was able to successfully draw a new character that it had never seen before, meaning it had essentially "learned" the skill. That might not sound impressive, because we humans can do it easily. But so far, similar feats have only been done by large neural networks that require huge databases of images and learn more by brute force than smarts.

  • ICYMI: Plant-powered lamps, livestreaming AI and more

    by Kerry Davis, 11.27.2015

    Today on In Case You Missed It: A coder from the Netherlands used a live webcam feed for a walk around Amsterdam, running neural network code that identified everything in view. Despite some obvious setbacks (it thought the creator was wearing a suit when he really wore a zip-up hoodie, natch), it impressively identified boats in a river and stacks of bikes. Researchers in Peru invented prototype lamps that run off of the bacteria of living plants. And a new security system for the camera-hacking averse works by setting up a motion-detecting mesh network.

  • This selfie AI is trained to know a good shot from a bad one

    by Mat Smith, 10.30.2015

    What's the difference between a good and bad selfie? A neural network artificial intelligence, trained on a diet of over two million selfies, apparently knows. First, the important findings: good selfies involve being a woman -- specifically, one who's tilting her head. A small forehead and longer hair are good points too. Filters help, as do borders. For men, while they didn't rank in the AI's top 100 (ugh, bias!), the bot advises that you show your full head and shoulders. Longer hairstyles (and ones combed upwards) don't hurt men's chances either. Its creator, Andrej Karpathy, who has worked with Google Research and DeepMind, explains that it's a convolutional neural network which does the image recognizing and, er, judging. You can judge yourself (for yourself) using the network's Twitter bot (61.7 percent here), or read on for how it learned to do all that.

  • Simulated brain cells give robot instinctive navigation skills

    by Andrew Tarantola, 10.21.2015

    A team of researchers at Singapore's Agency for Science, Technology and Research (A*STAR) announced on Wednesday that they had taught a robot how to navigate on its own, in much the same way that humans and other animals do. They reportedly accomplished this feat by digitally replicating two types of neurons that help animals geolocate naturally.

  • You'll never believe what neural networks can do now

    by Steve Dent, 10.16.2015

    Clickbait headlines are the lowest form of journalism, but could they be written by a machine? After all, the Associated Press is using one to write complete financial articles, terrible as they are. Developer Lars Eidnes figured that "if this sort of writing truly is formulaic and unoriginal, we should be able to produce it automatically." Rather than building another Upworthy-style headline generator, however, Eidnes took it up a notch by enlisting a so-called recurrent neural network (RNN). That's the same type of machine learning used by SwiftKey, for one, on its beta SwiftKey Neural word-prediction app.
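A recurrent network like the one Eidnes used feeds each character it generates back in as the input for the next step. A minimal, untrained sketch of that sampling loop (weights are random, so the output is gibberish; the vocabulary and layer sizes here are made up, not Eidnes' setup):

```python
import numpy as np

# Toy character-level RNN sampler. No training happens here -- the
# point is the recurrent loop: hidden state carries context forward,
# and each sampled character becomes the next input.
vocab = list("abcdefghij ")
V, H = len(vocab), 16
rng = np.random.default_rng(42)
Wxh = rng.standard_normal((H, V)) * 0.1   # input -> hidden
Whh = rng.standard_normal((H, H)) * 0.1   # hidden -> hidden (recurrence)
Why = rng.standard_normal((V, H)) * 0.1   # hidden -> output logits

def sample(n_chars=20):
    h = np.zeros(H)
    idx = 0
    out = []
    for _ in range(n_chars):
        x = np.zeros(V); x[idx] = 1.0          # one-hot current char
        h = np.tanh(Wxh @ x + Whh @ h)         # recurrent state update
        p = np.exp(Why @ h); p /= p.sum()      # softmax over next char
        idx = rng.choice(V, p=p)               # sample, feed back in
        out.append(vocab[idx])
    return "".join(out)

print(sample())
```

Training (which this sketch omits) would adjust the three weight matrices so that sampled sequences resemble the headline corpus.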

  • Brain simulation breakthrough reveals clues about sleep, memory

    by Steve Dent, 10.09.2015

    The Blue Brain Project is a vast effort by 82 scientists worldwide to digitally recreate the human brain. While still far from that goal, the team revealed a breakthrough that has already provided insight into sleep, memory and neurological disorders. They created a simulation of a third of a cubic millimeter of a rat's brain. While that might not sound like much, it involves 30,000 neurons and 37 million synapses. In addition, the simulation's level of biological accuracy is far beyond anything achieved so far. It allowed them to reproduce known brain activities -- such as how neurons respond to touch -- and has already yielded discoveries about the brain that were impossible to get biologically.

  • SwiftKey's latest keyboard is powered by a neural network

    by Aaron Souppouris, 10.08.2015

    A new SwiftKey keyboard hopes to serve you better typing suggestions by utilizing a miniaturized neural network. SwiftKey Neural does away with the company's tried-and-tested prediction engine in favor of a method that mimics the way the brain processes information. It's a model that's typically deployed on a grand scale for things like spam and phishing prevention in Gmail or image recognition, but very recent advancements have seen neural networks creep into phones through Google Translate, which uses one for offline text recognition. According to SwiftKey, this is the first time it's been used on a phone keyboard.
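SwiftKey hasn't published its network's internals; as a stand-in, the sketch below implements the same interface -- rank likely next words given what was just typed -- with a plain bigram count model rather than a neural one. The corpus and function names are made up for illustration:

```python
from collections import Counter, defaultdict

# Next-word suggestion via bigram counts: a deliberately simple
# stand-in for SwiftKey's neural predictor, showing the interface a
# keyboard needs ("given the last word, rank candidates").
corpus = "the cat sat on the mat the cat ran on the road".split()
nexts = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    nexts[prev][word] += 1        # count each observed bigram

def suggest(prev_word, k=3):
    """Top-k next-word suggestions after `prev_word`."""
    return [w for w, _ in nexts[prev_word].most_common(k)]

print(suggest("the"))             # 'cat' ranks first (seen twice)
```

A neural model replaces the count table with a learned function, which is what lets it generalize to word sequences it has never seen.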

  • Trippy art project has you exploring fractals in virtual reality

    by Jon Fingas, 09.27.2015

    Fractal art can already be mesmerizing when you're staring at a 2D picture, but artist Matteo Zamagni has found a way to kick things up a notch. His Nature Abstraction art project has you diving into 3D fractals thanks to both an Oculus Rift virtual reality headset and the almost psychedelic imagery from Google's neural network-based Deepdream. The result, as you'll see below, is rather hypnotic -- you're floating through formula-based shapes that are at once familiar and completely alien. Zamagni sees it as a way to challenge the accuracy of your perceptions. You're sadly too late to see this installation in person (it was part of an exhibit at London's Barbican this August), but here's hoping that it resurfaces... it looks like a wild mind trip.

  • Algorithm turns any picture into the work of a famous artist

    by Sean Buckley, 08.31.2015

    A group of German researchers has created an algorithm that basically amounts to the most amazing Instagram filter ever conceived: a convolutional neural network that can convert any photograph into a work of fine art. The process takes an hour (sorry, it's not actually coming to a smartphone near you), and the math behind it is horrendously complicated, but the results speak for themselves.
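The German team's method (described in the paper "A Neural Algorithm of Artistic Style") separates a painting's style from its content by summarizing a CNN's feature maps as Gram matrices of channel correlations. A small numpy sketch of that statistic, with made-up feature values standing in for real CNN activations:

```python
import numpy as np

def gram_matrix(features):
    """Style statistic over a stack of CNN feature maps.
    features: array of shape (channels, height, width).
    Returns the (channels x channels) matrix of channel correlations."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)          # flatten each channel
    return f @ f.T / (h * w)                # normalized inner products

# Random activations stand in for a real network's feature maps.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8))
g = gram_matrix(feats)
print(g.shape)                              # (4, 4)
```

Because the Gram matrix discards spatial layout and keeps only which feature channels fire together, matching it transfers a painting's texture without copying its scene -- that's the trick behind the "filter".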

  • IBM wires up 'neuromorphic' chips like a rodent's brain

    by Andrew Tarantola, 08.17.2015

    IBM has been working with DARPA's Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program since 2008 to develop computing systems that work less like conventional computers and more like the neurons inside your brain. After years of development, IBM has finally unveiled the system to the public as part of a three-week "boot camp" training session for academic and government researchers.

  • Google's chatbot learned it all from movies

    by Mariella Moon, 07.02.2015

    Chatbots are pretty common these days -- a simple search can surface numerous variants you can talk to on a lonely Friday night. The one Google is developing, however, isn't your run-of-the-mill chatbot: it wasn't programmed to respond to questions a specific way. Instead, it uses neural networks (a collection of machines that mimic the neurons in the human brain) to learn from existing conversations and conjure up its own answers. Mountain View, along with Facebook and Microsoft, already uses neural networks for other purposes, such as to create works of art, to identify objects in images and to recognize spoken words.

  • Facebook and Google get neural networks to create art

    by Jon Fingas, 06.20.2015

    For Facebook and Google, it's not enough for computers to recognize images... they should create images, too. Both tech firms have just shown off neural networks that automatically generate pictures based on their understanding of what objects look like. Facebook's approach uses two of these networks to produce tiny thumbnail images. The technique is much like what you'd experience if you learned painting from a harsh (if not especially daring) critic. The first algorithm creates pictures based on a random vector, while the second checks them for realistic objects and rejects the fake-looking shots; over time, you're left with the most convincing results. The current output is good enough that 40 percent of pictures fooled human viewers, and there's a chance that they'll become more realistic with further refinements.
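Facebook's two-network setup -- one net proposing images, the other rejecting fakes -- can be sketched at toy scale. Below is a hedged one-dimensional caricature: the "generator" only learns an offset `b`, the "discriminator" is a logistic regression, and every name and hyperparameter is illustrative rather than Facebook's actual model:

```python
import numpy as np

# 1D adversarial training: real data clusters around 3.0, the
# generator shifts noise by a learned offset b, and the discriminator
# tries to tell the two apart. Each side takes turns improving.
rng = np.random.default_rng(0)
REAL_MEAN = 3.0
w, c, b = 0.0, 0.0, 0.0        # discriminator weights; generator offset
lr, batch = 0.05, 64

def D(x):                       # discriminator: P(sample is real)
    return 1.0 / (1.0 + np.exp(-(w * x + c)))

for _ in range(500):
    real = rng.normal(REAL_MEAN, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + b
    # Discriminator ascent on log D(real) + log(1 - D(fake))
    dr, df = D(real), D(fake)
    w += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))
    # Generator ascent on log D(fake): nudge b toward what fools D
    fake = rng.normal(0.0, 1.0, batch) + b
    b += lr * np.mean((1 - D(fake)) * w)

print(round(b, 2))  # drifts toward the real mean (no exact guarantee)
```

The same tug-of-war, scaled up to convolutional networks over pixels, is what produces the thumbnails that fooled 40 percent of viewers.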

  • Brain-like circuit performs human tasks for the first time

    by Jon Fingas, 05.12.2015

    There are already computer chips with brain-like functions, but having them perform brain-like tasks? That's another challenge altogether. Researchers at UC Santa Barbara aren't daunted, however -- they've used a basic, 100-synapse neural circuit to perform the typical human task of image classification for the first time. The memristor-based technology (which, by its nature, behaves like an 'analog' brain) managed to identify letters despite visual noise that complicated the task, much as you would spot a friend on a crowded street. Conventional computers can do this, but they'd need a much larger, more power-hungry chip to replicate the same pseudo-organic behavior.
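The circuit's task -- naming a letter despite visual noise -- can be sketched in software with a single template-matching layer, loosely analogous to a small synaptic crossbar. This is an illustration with made-up 3x3 "letters", not the UCSB team's actual circuit:

```python
import numpy as np

# One weight row per letter class; classification is just "which row
# of synapses responds most strongly", as in a memristor crossbar.
LETTERS = {
    "T": np.array([[1, 1, 1],
                   [0, 1, 0],
                   [0, 1, 0]], float),
    "L": np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]], float),
}
names = list(LETTERS)
W = np.stack([LETTERS[n].ravel() for n in names])

def classify(image):
    """Return the letter whose weight row responds most."""
    scores = W @ (image.ravel() - 0.5)   # center pixels around zero
    return names[int(np.argmax(scores))]

rng = np.random.default_rng(1)
noisy_T = LETTERS["T"] + rng.normal(0, 0.3, (3, 3))
print(classify(noisy_T))                 # still recognized as "T"
```

The robustness to noise comes from the margin between templates: moderate pixel-level corruption barely moves the class scores.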

  • Microsoft's imaging tech is (sometimes) better than you at spotting objects

    by Jon Fingas, 02.15.2015

    Many computer vision projects struggle to mimic what people can achieve, but Microsoft Research thinks that its technology might have already trumped humanity... to a degree, that is. The company has published results showing that its neural network technology made fewer mistakes recognizing objects than humans in an ImageNet challenge, slipping up on 4.94 percent of pictures versus 5.1 percent for humans. One of the keys was a "parametric rectified linear unit" function (try saying that three times fast) that improves accuracy without any real hit to processing performance.
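The "parametric rectified linear unit" (PReLU) named above generalizes the standard ReLU by learning the slope it applies to negative inputs instead of clamping them to zero. A minimal numpy sketch -- the slope is fixed here, whereas in the network it's a learned per-channel parameter:

```python
import numpy as np

def prelu(x, a=0.25):
    """Parametric ReLU: identity for positive inputs, slope `a`
    (learned during training in practice) for negative inputs."""
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(prelu(x))  # [-0.5   -0.125  0.     1.     3.   ]
```

Letting a little signal through on the negative side costs almost nothing at inference time, which is why the accuracy gain comes with "no real hit to processing performance".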

  • Computers can now describe images using language you'd understand

    by Jon Fingas, 11.18.2014

    Software can now easily spot objects in images, but it can't always describe those objects well; "short man with horse" not only sounds awkward, it doesn't reveal what's really going on. That's where a computer vision breakthrough from Google and Stanford University might come into play. Their system combines two neural networks, one for image recognition and another for natural language processing, to describe a whole scene using phrases. The program needs to be trained with captioned images, but it produces much more intelligible output than you'd get by picking out individual items. Instead of simply noting that there's a motorcycle and a person in a photo, the software can tell that this person is riding a motorcycle down a dirt road. The software is also roughly twice as accurate at labeling previously unseen objects when compared to earlier algorithms, since it's better at recognizing patterns.

  • Google's latest object recognition tech can spot everything in your living room

    by Jon Fingas, 09.08.2014

    Automatic object recognition in images is currently tricky. Even if a computer has the help of smart algorithms and human assistants, it may not catch everything in a given scene. Google might change that soon, though; it just detailed a new detection system that can easily spot lots of objects in a scene, even if they're partly obscured. The key is a neural network that can rapidly refine the criteria it's looking for without requiring a lot of extra computing power. The result is a far deeper scanning system that can both identify more objects and make better guesses -- it can spot tons of items in a living room, including (according to Google's odd example) a flying cat. The technology is still young, but the internet giant sees its recognition breakthrough helping everything from image searches through to self-driving cars. Don't be surprised if it gets much easier to look for things online using only the vaguest of terms.