Thursday, December 23, 2021

Can the neuromorphic processor architecture set off a new wave?

According to technical historians, it was Carver Mead who coined the term “Moore’s Law,” roughly ten years after Gordon Moore published his landmark 1965 article in Electronics Magazine, “Cramming More Components onto Integrated Circuits.” Over the following decades, the law outlined in that article changed the world: every two years or so, semiconductor companies would double the number of transistors they could manufacture on a single semiconductor chip.
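To put rough numbers on that cadence, here is a back-of-the-envelope sketch (our illustration, not Moore’s own arithmetic): starting from the roughly 2,300 transistors of Intel’s 4004 in 1971 and doubling every two years lands within shouting distance of the billion-transistor chips of the early 2010s.

```python
# Back-of-the-envelope Moore's Law arithmetic: transistor counts double
# roughly every two years. The starting point (Intel 4004, ~2,300
# transistors, 1971) is historical; the projection is illustrative only.
def moores_law(initial_count: float, years: float, doubling_period: float = 2.0) -> float:
    """Projected transistor count after `years` at one doubling per `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

print(f"{moores_law(2_300, 40):,.0f} transistors")  # ~2.4 billion, roughly a 2011-era chip
```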

More significantly, the doubling of transistors every two years brought an even faster exponential increase in computing power. Moore’s Law gave us not just more transistors, but faster, cheaper, and more energy-efficient transistors. Together, these factors allowed us to build ever faster, more complex, and higher-performance computing devices.

By 1974, Robert Dennard had observed that as process geometries shrink, density, speed, and energy efficiency all improve together, so computing power efficiency scales even faster than the transistor count. This trend, called “Dennard Scaling,” held for about three decades and drove unprecedented exponential improvements in computing performance – and, as it turned out, more importantly in power.
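For readers who want the arithmetic spelled out, here is a sketch of the idealized scaling relations Dennard described (textbook numbers, our illustration, not taken from his paper): shrink every linear dimension and the supply voltage by a factor k, and density, speed, and per-device power all improve at once, so compute per watt grows roughly as k cubed – faster than the k-squared growth in transistor count alone.

```python
# Idealized Dennard scaling: shrink linear dimensions and supply voltage
# by a factor k > 1. Textbook consequences (illustrative sketch only):
def dennard_scaling(k: float) -> dict:
    return {
        "transistor_density": k ** 2,    # k^2 more devices per unit area
        "switching_speed": k,            # gates switch ~k times faster
        "power_per_device": 1 / k ** 2,  # C*V^2*f per device falls by k^2
        "power_density": (k ** 2) * (1 / k ** 2),  # constant: chips run no hotter
        "compute_per_watt": k ** 3,      # k^2 density x k speed at constant power
    }

print(dennard_scaling(1.4))  # one classic ~0.7x linear-shrink node
```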

All of these improvements in computing power are built on the von Neumann processor architecture, developed by John von Neumann and others in 1945 and recorded in the unfinished “First Draft of a Report on the EDVAC.” Ironically, the most impressive technological revolution in history rests on a design left unfinished three-quarters of a century ago. For all the remarkable advances in digital computing during the Moore’s Law era, that basic computing architecture, now 75 years old, remains essentially unchanged.

Is the von Neumann architecture simply the best way to perform computation? Of course not. To borrow from Winston Churchill, von Neumann is the worst computing architecture – except for all the others. Its advantages are flexibility and area efficiency: it can handle almost any arbitrarily complex application without requiring the processor’s transistor count to grow with the size of the problem.

In the past, that architectural efficiency mattered enormously – before so many components could be crammed into integrated circuits. We could build 4-bit, 8-bit, or 16-bit von Neumann processors with very few transistors and still run large applications at acceptable speeds. But as Moore’s Law progressed, transistors gradually approached zero cost. With the supply of transistors nearly unlimited, the value of building a processor from fewer of them has greatly diminished.

At the same time, even with Moore’s Law running at full tilt, the value extracted from each new process node has declined. Dennard Scaling ended around 2005, which forced us to switch from building bigger, faster von Neumann processors to building “more” von Neumann processors. That game filled chips with additional cores, but von Neumann’s scalability across many cores brings its own limitations.

Sadder still, Moore’s Law itself has stopped flourishing. The cost of each recent process node has risen steeply while the actual benefits have shrunk. The result is that, even though we could technically manufacture several more generations of denser chips, the cost/benefit ratio of doing so looks less and less attractive.

Now we need drivers other than Moore’s Law to maintain the pace of technological progress.

Clearly, von Neumann’s days as the single, universal computing architecture are numbered. The recent AI revolution has accelerated the development of alternatives. AI – particularly AI built on convolutional neural networks (CNNs) – is incredibly computationally intensive, and it is an application to which von Neumann is not particularly well suited. This is pushing us away from large arrays of identical computing elements and toward complex configurations of heterogeneous elements, mixing von Neumann and non-von Neumann approaches.

The neuromorphic architecture is one of the most promising non-von Neumann approaches to AI. In the late 1980s, Carver Mead (yes, reportedly the same person who coined “Moore’s Law”) observed that, on the development trajectory of the time, von Neumann processors consumed millions of times more energy for computation than the human brain does. His theory was that more efficient computing circuits could be built by mimicking the neuron structure of the brain. Mead used transistor currents to emulate the ion flows of neurons, and from this idea developed an approach he called neuromorphic computing.

At the time, neuromorphic computing was conceived as an analog affair, with neurons triggering one another via continuously varying voltages or currents. But the world was committed to optimizing the binary world of digital design, and analog circuits could not ride the same exponential scaling as digital ones, so neuromorphic computing developed outside the mainstream track of Moore’s Law.

However, now the situation has changed.

In the long run, we have seen most analog functions subsumed into digital approximations. Neuromorphic processors have now been implemented as so-called “spiking neural networks” (SNNs), which rely on a single spike from each neuron to activate the chain of neurons downstream. These networks are completely asynchronous, and information is carried by the timing of spikes rather than by transmitted values. With this technique, a neuromorphic processor can be implemented in state-of-the-art bulk CMOS digital technology – which means neuromorphic architectures can finally benefit from Moore’s Law. As a result, several practical neuromorphic processors have been built and tested, and the results are impressive and encouraging.
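To make the idea concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) layer, the basic building block most SNNs use. This is our own illustration with made-up parameters, not BrainChip’s or Intel’s implementation: each neuron accumulates weighted input spikes, leaks potential over time, and emits a single binary spike (then resets) when it crosses a threshold.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) layer, a common SNN building block.
# Neurons integrate weighted input spikes, leak potential each timestep,
# and emit a binary spike (then reset) on crossing the threshold.
# All parameter values here are illustrative, not from any real chip.
class LIFLayer:
    def __init__(self, n_in: int, n_out: int, leak: float = 0.9, threshold: float = 1.0):
        self.w = np.random.uniform(0, 0.5, (n_in, n_out))  # synaptic weights
        self.v = np.zeros(n_out)                           # membrane potentials
        self.leak, self.threshold = leak, threshold

    def step(self, in_spikes: np.ndarray) -> np.ndarray:
        self.v = self.leak * self.v + in_spikes @ self.w   # leak, then integrate
        out_spikes = (self.v >= self.threshold).astype(float)
        self.v[out_spikes == 1] = 0.0                      # reset neurons that fired
        return out_spikes                                  # binary events, not values

layer = LIFLayer(n_in=4, n_out=2)
for t in range(5):  # drive with random binary input spikes
    print(t, layer.step(np.random.binomial(1, 0.5, 4)))
```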

One example we reported on about two years ago is BrainChip’s Akida neuromorphic processor, launched in December 2020. BrainChip claims its devices consume 90% to 99% less power than conventional CNN-based solutions. As far as we know, this is one of the first neuromorphic technologies to enter the broad commercial market, and its potential applications are enormous.

BrainChip offers its technology both as IP and as an SoC – a complete implementation in silicon. Almost any system that could take advantage of “edge” AI stands to benefit from this kind of power savings, which can often make the difference between being able to do edge AI at all and not.

Also in December 2020, Intel provided an update on its neuromorphic research test chip, Loihi, and its “Intel Neuromorphic Research Community (INRC),” both announced two years earlier. Loihi has been benchmarked on a wide range of applications, including voice command recognition, gesture recognition, image retrieval, optimization and search, and robotics. It delivers 30 to 1,000 times better energy efficiency than CPUs and GPUs, and as much as 100 times the speed. Equally important, in stark contrast to CNN-based systems, the architecture lends itself to fast, continuous learning, whereas CNN-based systems typically undergo an intensive training phase that produces a static inference model. Intel says it is aiming for a further 1,000-fold gain in energy efficiency and 100-fold gain in performance.

Not every problem will move to neuromorphic processing. Algorithms well suited to today’s deep learning techniques are the obvious first candidates. Intel is also evaluating algorithms “inspired by neuroscience” that emulate processes found in the brain. Finally, they are studying mathematically formulated problems.

In the first category, networks can be converted from today’s deep neural networks (DNNs) into a format the neuromorphic chip can use. Alternatively, the neuromorphic processor itself can be used to create a “directly trained” network. Finally, although the “backpropagation” common in CNNs requires global communication, it can be approximated on a neuromorphic processor.
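One common way to make that first conversion path concrete is rate coding: a trained network’s (normalized) ReLU activations are reinterpreted as spike probabilities, so the SNN’s average firing rates approximate the original DNN’s activations. Below is a toy sketch of just the encoding step – our illustration; real conversion toolchains also rescale weights and thresholds.

```python
import numpy as np

# Toy rate-coding step used in DNN-to-SNN conversion: a normalized ReLU
# activation becomes the probability of a spike in each timestep, so the
# mean firing rate over T steps approximates the original activation.
def rate_encode(activations: np.ndarray, timesteps: int, rng=np.random.default_rng(0)):
    a = np.clip(activations, 0.0, 1.0)              # assume activations normalized to [0, 1]
    return rng.random((timesteps, *a.shape)) < a    # binary spike trains, shape (T, ...)

acts = np.array([0.1, 0.5, 0.9])   # pretend ReLU outputs from a trained DNN
spikes = rate_encode(acts, timesteps=1000)
print(spikes.mean(axis=0))         # firing rates ~ [0.1, 0.5, 0.9]
```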

Loihi is a research chip, not designed for production. It is a two-billion-transistor device manufactured in Intel’s 14nm CMOS process. Loihi comprises a fully asynchronous “neuromorphic many-core mesh that supports a wide range of sparse, hierarchical, and recurrent neural network topologies, with each neuron capable of communicating with thousands of other neurons.”

Each of these cores includes a learning engine that adapts parameters during operation. The chip contains 130,000 neurons and 130 million synapses, divided among 128 neuromorphic cores, and includes a microcode learning engine for on-chip SNN training. Loihi chips have been integrated into boards and boxes containing as many as 768 chips and 100 million neurons.

Now we are at the intersection of several trends that could form a perfect storm for a processor-architecture revolution. First, neuromorphic processors are at an inflection point of commercial viability, and for certain problems they deliver progress equivalent to 10 Moore’s Law nodes (20 years).

Second, traditional DNNs are evolving rapidly and have produced architectural innovations related and similar to those found in neuromorphic processors, suggesting the two architectural camps may merge into a future “best of both worlds” architecture.

Third, Moore’s Law is coming to an end, which will focus more attention, talent, and money on developing alternative architectures to drive future technological progress.

Fourth, it will be interesting to watch as the first of these neuromorphic processors gain commercial traction and create a virtuous circle of investment, development, refinement, and deployment. Within a few years, neuromorphic architectures (or derivative technologies) may play an important role in our computing infrastructure, rapidly advancing toward cutting-edge new applications that today we can only imagine.

