IBM’s new brain-mimicking chip could power the Internet of Things
The new processor, code-named “TrueNorth,” has 5.4 billion transistors woven into an on-chip network of 4,096 neurosynaptic cores, producing the equivalent of 256 million synapses, far more than the roughly 260,000 synapses of IBM's 2011 prototype design.
IBM has also tethered 16 of these chips together in a four-by-four array, which collectively offers the equivalent of 16 million neurons and 4 billion synapses, demonstrating that the design can be scaled up for larger implementations.
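The totals above are consistent with 4,096 cores of 256 neurons each, with a full 256-by-256 synaptic crossbar per core. The article only gives the aggregate figures, so the per-core breakdown here is an assumption, but the arithmetic reproduces the quoted numbers (in binary millions and billions):

```python
# Back-of-the-envelope check of the chip and 16-chip array totals.
# Per-core figures (256 neurons, 256x256 synapses) are an assumption,
# chosen because they reproduce the totals quoted in the article.
cores_per_chip = 4096
neurons_per_core = 256
synapses_per_core = 256 * 256  # full crossbar: every input axon to every neuron

neurons_per_chip = cores_per_chip * neurons_per_core
synapses_per_chip = cores_per_chip * synapses_per_core
print(neurons_per_chip)    # 1,048,576   -> ~1 million neurons per chip
print(synapses_per_chip)   # 268,435,456 -> the "256 million" synapses figure

chips = 16
print(chips * neurons_per_chip)   # 16,777,216    -> ~16 million neurons
print(chips * synapses_per_chip)  # 4,294,967,296 -> ~4 billion synapses
```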
The work originated in 2008 as a U.S. Defense Advanced Research Projects Agency (DARPA) project, under the project name of Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE).
The chips represent a radical break from today’s von Neumann computing architecture, in which computations are executed one step at a time in a serial fashion.
This chip architecture approximates how the human brain works: each “neurosynaptic core” has its own memory (the “synapses”), processor (the “neuron”) and communication conduit (the “axons”), all of which operate together in an event-driven fashion.
By working together, these cores could provide nuanced pattern recognition and other sensing capabilities, in much the same way a brain does.
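One way to picture the event-driven operation described above is a neuron that does work only when a spike arrives on one of its input axons, rather than on every clock cycle. The sketch below is purely illustrative, a toy integrate-and-fire core, not IBM's actual neuron model or programming interface:

```python
from collections import deque

# Toy event-driven "core": local synaptic weights (memory), an
# integrate-and-fire update (neuron), and an outgoing spike queue (axons).
# Illustrative only -- not TrueNorth's actual neuron model.
class ToyCore:
    def __init__(self, weights, threshold=1.0, leak=0.01):
        self.weights = weights                    # weights[axon][neuron], the core's local memory
        self.potentials = [0.0] * len(weights[0]) # one membrane potential per neuron
        self.threshold = threshold
        self.leak = leak
        self.out_spikes = deque()                 # spikes routed onward to other cores

    def on_spike(self, axon):
        """Event-driven update: runs only when a spike arrives on an axon."""
        for n, w in enumerate(self.weights[axon]):
            self.potentials[n] += w
            if self.potentials[n] >= self.threshold:
                self.out_spikes.append(n)   # neuron n fires
                self.potentials[n] = 0.0    # reset after firing

    def tick(self):
        """Periodic leak so idle neurons drift back toward rest."""
        self.potentials = [max(0.0, v - self.leak) for v in self.potentials]

# Two axons, two neurons: two incoming spikes push neuron 0 past threshold.
core = ToyCore(weights=[[0.6, 0.2], [0.6, 0.3]])
core.on_spike(0)
core.on_spike(1)
print(list(core.out_spikes))  # [0] -> only neuron 0 crossed the threshold
```

Because nothing happens between spikes, idle cores burn essentially no compute, which is the intuition behind the chip's very low power draw.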
Like the brain, this chip requires very little power—only 70 mW during typical operation, an order of magnitude lower than what standard processors would require to execute the same operations. Samsung fabricated the prototype chip using a 28-nanometer lithographic process.
Requiring so little power—less than that required by a hearing aid—opens up a vast array of potential uses, especially on devices with limited power sources.
For instance, a processor could be embedded in a mobile device or a sensor, where it could be trained to perform object recognition on auditory, visual or multi-sensory data, a computationally intensive task that today requires a dedicated server. Such jobs could instead be done on the device itself, eliminating the need to stream video to a data center.
“The sensor becomes the computer,” said Dharmendra Modha, the IBM researcher who leads the project.
The brain-inspired architecture is not designed to replace standard processors, but rather to be used in conjunction with them, tackling jobs that require large numbers of operations to be carried out in parallel.
In the data center, the chips could be used in co-processor acceleration cards for running machine-learning neural networks, Modha said. Many machine-learning algorithms now in commercial use could be easily adapted to this architecture, which carries out highly parallel operations in a more energy-efficient manner.
IBM is still investigating how to commercialize this processor, Modha said, and has made no commitments to either manufacture the chip itself or license the design out to others. Modha said that, from early fabrication work, he saw no “fundamental risks” in producing these chips in large volumes.
The company is also developing compilers and related software to make these processors easy to use.