IBM wants to accelerate AI learning with new processor tech

"Resistive" chips could speed up neural network training 30,000 times.
Steve Dent, @stevetdent
March 28, 2016
Deep neural networks (DNNs) can be taught nearly anything, including how to beat us at our own games. The problem is that training AI systems ties up big-ticket supercomputers or data centers for days at a time. Scientists from IBM's T.J. Watson Research Center think they can cut the horsepower and learning times drastically using "resistive processing units," theoretical chips that combine CPU and non-volatile memory. Those could boost data-transfer speeds dramatically, resulting in systems that can do tasks like "natural speech recognition and translation between all world languages," according to the team.

So why does it take so much computing power and time to teach AI? The problem is that modern neural networks like Google's DeepMind or IBM Watson must perform billions of tasks in parallel. That requires numerous CPU memory calls, which quickly add up over billions of cycles. The researchers considered using new storage tech like resistive RAM that can permanently store data at DRAM-like speeds. However, they eventually came up with the idea for a new type of chip called a resistive processing unit (RPU) that puts large amounts of resistive RAM directly onto a CPU.
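To get a feel for why those memory calls dominate, here is a rough back-of-envelope sketch (not from IBM's paper; the layer size and byte counts are illustrative assumptions). Every training step has to pull an entire weight matrix out of main memory, which is exactly the traffic an RPU would avoid by keeping weights in resistive memory on the chip itself:

```python
# Illustrative sketch: estimate the weight traffic a conventional processor
# incurs per training step on one neural-network layer. The weight matrix is
# read for the forward pass, read again for the backward pass, and
# read-modified-written for the update.
def training_step_traffic(n_inputs, n_outputs, bytes_per_weight=4):
    """Bytes of weight data moved in one forward + backward + update pass."""
    weight_bytes = n_inputs * n_outputs * bytes_per_weight
    return 4 * weight_bytes  # 2 reads + 1 read-modify-write (1 read + 1 write)

# A single (hypothetical) 4096x4096 layer of 32-bit weights moves 256 MiB
# of weight data per step -- repeated billions of times over a training run.
per_step = training_step_traffic(4096, 4096)
print(per_step / 2**20, "MiB per step")  # -> 256.0 MiB per step
```

Multiply that by many layers and billions of update cycles, and fetching weights from off-chip memory becomes the bottleneck, not the arithmetic.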


Such chips could fetch the data as quickly as they can process it, dramatically decreasing neural network training times and the power required. "This massively parallel RPU architecture can achieve acceleration factors of 30,000 compared to state-of-the-art microprocessors ... problems that currently require days of training on a datacenter-size cluster with thousands of machines can be addressed within hours on a single RPU accelerator," according to the paper.
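The claim's arithmetic works out if you note that the 30,000x figure is against a single microprocessor, while the cluster it replaces contains thousands of them. A quick sanity check, using assumed numbers (the cluster size and job length below are illustrative, not from the paper):

```python
# Back-of-envelope check of the paper's "days on a cluster -> hours on one
# RPU" claim. Only the 30,000x factor comes from the paper; the cluster
# size and job duration are assumptions for illustration.
RPU_SPEEDUP = 30_000          # one RPU vs. one state-of-the-art microprocessor
cluster_machines = 5_000      # assumed "datacenter-size cluster"
job_days = 3                  # assumed multi-day training job

# Against the whole cluster, the net speedup is roughly 30,000 / N machines.
net_speedup = RPU_SPEEDUP / cluster_machines
hours_on_rpu = job_days * 24 / net_speedup
print(f"~{hours_on_rpu:.0f} hours on a single RPU")  # -> ~12 hours
```

Under those assumptions a three-day cluster job collapses to about half a day on one chip, consistent with the paper's "within hours" phrasing.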

The scientists believe it's possible to build such chips using regular CMOS technology, but for now RPUs are still in the research phase. Furthermore, the underlying technology, like resistive RAM, has yet to be commercialized. However, building chips with fast local memory is a logical idea that could dramatically speed up AI tasks like image processing, language mastery and large-scale data analysis -- you know, all the things experts say we should be worried about.
