Lots of tech companies, including Apple, Google, Microsoft, NVIDIA and Intel itself, have created chips for image recognition and other deep-learning chores. Intel, however, is also taking another tack with an experimental chip called "Loihi." Rather than relying on raw computing horsepower, it uses an old-school, as-yet-unproven type of "neuromorphic" tech modeled after the human brain.
Intel has been exploring neuromorphic tech for a while, and even designed a chip in 2012. Instead of logic gates, it uses "spiking neurons" as a fundamental computing unit. These can pass along signals of varying strength, much like the neurons in our own brains. They also fire only when needed, rather than being driven by a clock like a regular processor.
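To get a feel for the idea, here's a minimal sketch of a "leaky integrate-and-fire" neuron, one of the simplest textbook models of a spiking neuron. It's purely illustrative (Intel hasn't published Loihi's actual neuron model in this form), but it shows the key behavior described above: the neuron accumulates input and fires only when a threshold is crossed, rather than computing on every clock tick.

```python
# Toy leaky integrate-and-fire (LIF) neuron -- an illustrative sketch,
# not Intel's actual Loihi design.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Integrate incoming current over time; emit a spike (1) when the
    membrane potential crosses the threshold, then reset to zero."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leak a little, then integrate
        if potential >= threshold:
            spikes.append(1)   # fire only when needed -- event-driven
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))
# -> [0, 0, 0, 1, 0, 0, 1]
```

Note that with no input, the potential simply decays away; activity happens only in response to incoming spikes, which is where the claimed power savings of neuromorphic hardware come from.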
Intel's Loihi chip has 1,024 artificial neurons per core, or 130,000 simulated neurons in total with 130 million possible synaptic connections. That's a bit more complex than, say, a lobster's brain, but a long way from our 80 billion neurons.
Human brains work by relaying information with pulses or spikes, strengthening frequent connections and storing the changes locally at synapse interconnections. As such, brain cells don't function alone, because the activity of one neuron directly affects others -- and groups of cells working in concert lead to learning and intelligence.
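The "strengthening frequent connections" behavior can be sketched with a toy Hebbian-style update rule ("cells that fire together wire together"). This is an assumption-laden illustration, not Intel's on-chip learning rule: the point is that the change is stored locally, right at the synapse's weight.

```python
# Toy Hebbian-style synapse update -- illustrative only, not Loihi's
# actual learning rule. The weight is the "local storage" at the synapse.

def hebbian_update(weight, pre_spike, post_spike, rate=0.1):
    """Strengthen the synapse when the pre- and post-synaptic neurons
    fire together; weight growth is bounded so it approaches 1.0."""
    if pre_spike and post_spike:
        weight += rate * (1.0 - weight)
    return weight

w = 0.5
for _ in range(3):              # three coincident firings
    w = hebbian_update(w, 1, 1)
print(round(w, 4))
# -> 0.6355
```

Because each synapse only needs its own weight and the activity of the two neurons it connects, this kind of learning maps naturally onto hardware where memory sits next to compute, rather than in a separate bank.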
By simulating this behavior, the Loihi chip can (in theory) speed up machine learning while cutting power requirements by up to 1,000 times. What's more, all the learning can be done on-chip, instead of requiring enormous datasets. If incorporated into a computer, such chips could also learn new things on their own, rather than remaining ignorant of tasks they haven't been taught specifically.
These types of chips would give us the sort of AI behavior we expect (and fear) -- namely, robots and other devices that can learn as they go. "The test chip [has] enormous potential to improve automotive and industrial applications as well as personal robots," Intel says.
That all sounds good, but so far, neuromorphic chips have yet to prove themselves against current, brute-force deep-learning technology. IBM, for instance, has developed a neuromorphic chip called "TrueNorth" with 4,096 processors that simulate around 256 million synapses. However, Facebook's deep-learning specialist Yann LeCun said that chip wouldn't easily handle tasks like image recognition using the NeuFlow convolution model he designed.
Intel has also admitted that its neuromorphic chip wouldn't do well with some types of deep-learning models. Via its acquisitions of Movidius and Mobileye, however, it already has a line of machine-vision and learning chips that do work with current AI algorithms. It also acquired a company called Nervana last year to take on NVIDIA, the leader in AI cloud processing.
For Loihi, Intel plans to give the chips to select "leading university and research institutions" focused on artificial intelligence in the first half of 2018. The aim is to test the chip's feasibility for new types of AI applications and boost further development. Intel will build the chips using its 14-nanometer process technology and release the first test model in November.