Building Chips That Can Learn

Machine learning and AI require more than just power and performance.

The idea that devices can learn optimal behavior, rather than relying on more generalized hardware and software, is driving a resurgence in artificial intelligence, machine learning, and cognitive computing. But architecting, building, and testing these kinds of systems will require broad changes that ultimately could impact the entire semiconductor ecosystem.

Many of these changes are well understood. There is a need for higher performance per watt and per operation, because all of these developments will drive a huge increase in the amount of data that needs to be processed and stored. Other changes are less obvious and will require a certain amount of guesswork. For example, what will chips look like after they have “learned” to optimize data in a real-world setting? The semiconductor industry is used to measuring reliability as a function of performance that degrades over time. A well-designed adaptive learning system, in contrast, should in theory improve over time.
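To make that contrast concrete, here is a minimal sketch, in Python, of an online perceptron whose error rate falls as it is exposed to more data; the setup is purely illustrative and is not drawn from the article:

```python
# Illustrative sketch only: an online learner whose error rate falls
# with exposure, the opposite of a reliability curve that degrades.
import random

random.seed(0)
w = [0.0, 0.0]            # weights start "blank"
errors_per_block = []

for block in range(5):
    errors = 0
    for _ in range(200):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]
        label = 1 if x[0] + 2 * x[1] > 0 else -1      # hidden target rule
        pred = 1 if w[0] * x[0] + w[1] * x[1] > 0 else -1
        if pred != label:
            errors += 1
            w[0] += label * x[0]                      # perceptron update
            w[1] += label * x[1]
    errors_per_block.append(errors)

# Error counts typically shrink block by block as the system "learns."
print(errors_per_block)
```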

Part of this shift will be evolutionary, rolled out as technology progresses. Another part will be closer to revolutionary, modeled on the functioning of the human brain, which is far more efficient than any technology yet developed. In both cases, the amount of research and testing being done in this field is exploding, particularly for applications such as robotics, data management and processing, industrial systems, and vision systems in driver-assisted or fully autonomous vehicles.

“We’ve been having a lot of discussions lately about cognitive computing,” said Wally Rhines, chairman and CEO of Mentor Graphics. “When we’re born, all the cells in our brain are the same. Over time, those cells specialize into regions such as eyesight. The thinking is that if you start with identical (semiconductor) memory cells, you can specialize them with time. And based on the applications you expose the chip to, stored memory accumulates more data over time. The brain is largely about pattern recognition. What differentiates us from animals is predictive pattern recognition. That requires hierarchical memory and invariant memory. So you don’t store every pattern, but if you see a face in the shadows you can still recognize it. The human brain does this much more effectively than technology.”
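Rhines’ description of identical cells that specialize with exposure maps loosely onto competitive, winner-take-all learning. The toy model below is a hedged illustration of that idea; the patterns, unit count, and learning rate are invented for the sketch rather than taken from any real design:

```python
# Hedged toy model of "identical cells specialize with exposure":
# competitive (winner-take-all) learning over invented patterns.
import random

random.seed(1)
patterns = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
# Three near-identical units, like undifferentiated cells.
units = [[0.33 + random.uniform(-0.01, 0.01) for _ in range(3)]
         for _ in range(3)]

def similarity(unit, pattern):
    return sum(u * p for u, p in zip(unit, pattern))

for _ in range(300):
    p = random.choice(patterns)
    # Winner-take-all: only the best-matching unit adapts.
    winner = max(units, key=lambda u: similarity(u, p))
    for i in range(3):
        winner[i] += 0.1 * (p[i] - winner[i])  # drift toward the pattern

# Each unit typically ends up specialized to one distinct pattern.
for unit in units:
    print([round(v, 2) for v in unit])
```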

[button href="http://semiengineering.com/building-chips-that-can-learn/" align="left"]Read more here[/button]

Article by Ed Sperling, Semiconductor Engineering