Rock-paper-scissors is often a game of psychology: reverse psychology, reverse-reverse psychology, and randomness. But what if a computer could understand you well enough to win every time? A team from Hokkaido University and TDK (the company known for its cassette tapes), both based in Japan, has developed a chip that can do just that.
Okay, the chip doesn't read your mind. It uses an acceleration sensor placed on your thumb to measure your movement and determine whether the gesture is rock, paper, or scissors. The amazing thing is that once it has been trained on your specific gestures, the chip can predict what you will throw in the time it takes you to say “shoot,” allowing it to defeat you in real time.
The technique behind this feat is called reservoir computing, a machine learning method that uses a complex dynamical system to extract meaningful features from time-series data. The idea of reservoir computing originated in the 1990s. With the growth of artificial intelligence, there has been renewed interest in it because of its comparatively low power requirements and its potential for fast training and inference.
According to the research team, the first goal was reducing energy consumption, says Tomoyuki Sasaki, a department head and senior manager at TDK who worked on the device. “The second goal is the latency issue. In edge AI, latency is a huge problem.”
To minimize the power consumption and latency of their setup, the team developed a CMOS hardware implementation of an analog reservoir computing scheme. The team showed a demo at the CEATEC (Combined Exhibition of Advanced Technologies) conference in Chiba, Japan, in October and will present their paper at the International Conference on Rebooting Computing in San Diego this week.
What is reservoir computing?
Reservoir computing is best understood in contrast to traditional neural networks, the basic architecture underlying much of modern AI.
A neural network consists of artificial neurons arranged in layers. Each layer can be thought of as a column of neurons, where each neuron in a column connects to all the neurons in the next column through weighted artificial synapses. Data enters the first column and propagates from left to right, layer by layer, to the last column.
During training, the output of the last layer is compared with the correct answer, and this information is used to adjust the weights of all the synapses, this time working backward, layer by layer, in a process called backpropagation.
This setup has two important features. First, data flows in only one direction: forward. There are no loops. Second, every weight connecting any pair of neurons is adjusted during training. This architecture has proven extremely efficient and flexible, but it is also expensive; adjusting what can amount to billions of weights takes both time and energy.
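To make the contrast concrete, here is a minimal sketch of such a network in Python with NumPy. The layer sizes, learning rate, and tanh activation are illustrative choices, not details from any particular system; the point is simply that data moves strictly forward and that backpropagation updates every weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: every entry of W1 and W2 is trainable.
W1 = rng.normal(scale=0.5, size=(8, 4))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output

def forward(x):
    h = np.tanh(x @ W1)   # data moves strictly forward; no loops
    return h, h @ W2

# One backpropagation step: the output error is pushed backward,
# layer by layer, and *all* weights are updated.
def train_step(x, target, lr=0.01):
    global W1, W2
    h, y = forward(x)
    err = y - target
    grad_W2 = h.T @ err
    grad_W1 = x.T @ ((err @ W2.T) * (1 - h ** 2))  # tanh derivative
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

x = rng.normal(size=(16, 8))
train_step(x, rng.normal(size=(16, 1)))
```

Every weight changes on every training step; that per-weight bookkeeping is exactly the cost the reservoir approach avoids.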
Reservoir computing is also built from artificial neurons and synapses, but they are arranged in a fundamentally different way. First, there are no layers: neurons connect to other neurons in a complex, web-like manner with many loops. This gives the network a kind of memory, in which a given input can continue to echo through the system.
Second, the connections inside the reservoir are fixed. Data enters the reservoir, propagates through its complex structure, and then passes to the output through a final set of synapses. Only this last set of synapses, and their weights, is actually adjusted during training. This greatly simplifies the learning process and eliminates the need for backpropagation entirely.
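A minimal sketch of this idea, in the same NumPy style as above, is what's often called an echo state network, one common form of reservoir computing. The reservoir size, the spectral-radius scaling, and the ridge-regression readout below are my illustrative choices, not details from the Hokkaido/TDK work.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100  # reservoir neurons

# A fixed random web of recurrent connections -- never trained.
W_res = rng.normal(size=(N, N))
W_res *= 0.9 / np.abs(np.linalg.eigvals(W_res)).max()  # keep dynamics near stable
W_in = rng.normal(scale=0.5, size=N)                   # fixed input weights

def run_reservoir(u):
    """Drive the reservoir with an input series u; collect its states."""
    x, states = np.zeros(N), []
    for u_t in u:
        x = np.tanh(W_res @ x + W_in * u_t)  # loops give the network memory
        states.append(x.copy())
    return np.array(states)

# Only the readout is trained -- a single ridge-regression solve,
# with no backpropagation anywhere.
def train_readout(states, targets, ridge=1e-6):
    return np.linalg.solve(states.T @ states + ridge * np.eye(N),
                           states.T @ targets)
```

Because training collapses to one linear solve over the readout weights, it is far cheaper than adjusting every synapse in a deep network.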
Given that the reservoir is fixed and the only part that learns is this final “translation” layer from the reservoir to the desired output, it may seem miraculous that these networks are useful at all. Yet for certain tasks they have proven extremely effective.
“They are by no means the best model to use in a machine learning toolbox,” says Sanjukta Krishnagopal, an assistant professor of computer science at the University of California, Santa Barbara, who was not involved in the work. But for predicting the time evolution of things that behave chaotically, like the weather, they are the right tool for the job. “This is where reservoir computing shines.”
The reason is that the reservoir itself is a bit chaotic. “Your reservoir typically operates on the edge of chaos, which means it can represent a large number of possible states very simply, with a very small neural network,” says Krishnagopal.
Computing with a physical reservoir
Because the artificial synapses inside the reservoir are fixed and no backpropagation is required, there is a great deal of freedom in how the reservoir is implemented. People have used a variety of media to build physical reservoirs, including light, MEMS devices, and, my personal favorite, literal buckets of water.
The team from Hokkaido and TDK, however, wanted to create a CMOS-compatible chip that could be used in edge devices. To implement the artificial neurons, the team developed an analog circuit node. Each node consists of three components: a nonlinear resistor, a MOS-based memory element (a capacitor), and a buffer amplifier. Their chip consists of four cores, each made up of 121 such nodes.
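The article doesn't give the circuit equations, but the role each component plays can be illustrated with a toy discrete-time model: the capacitor holds a leaky memory of past signals, the nonlinear resistor supplies the nonlinearity, and the buffer amplifier passes the result on to the next node. Everything below, the leak rate, the tanh stand-in, the update rule, is an assumption for illustration, not the team's actual design.

```python
import numpy as np

def node_step(state, drive, leak=0.3):
    """Toy model of one analog reservoir node (illustrative, not the TDK circuit).

    state : voltage held on the MOS capacitor (the node's memory)
    drive : signal arriving from the neighboring node plus external input
    """
    nonlinear = np.tanh(drive)                     # stand-in for the nonlinear resistor
    state = (1 - leak) * state + leak * nonlinear  # capacitor charges and leaks
    return state                                   # buffer amplifier passes it onward
```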
Connecting nodes to one another in the complex, recurrent patterns required for a reservoir is difficult in hardware. To simplify things, the team settled on what's called a simple cycle reservoir, in which all the nodes are connected in one large loop. Previous work has suggested that even this relatively simple configuration can reproduce a wide range of complex dynamics.
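A simple cycle reservoir replaces the tangled random web with a single ring: each node drives only the next one, all with the same fixed weight. Here is a sketch of what that connection matrix looks like (the ring weight of 0.9 is an arbitrary illustrative value):

```python
import numpy as np

def cycle_reservoir(n_nodes, r=0.9):
    """Weight matrix for a simple cycle reservoir: one big loop."""
    W = np.zeros((n_nodes, n_nodes))
    for i in range(n_nodes):
        W[(i + 1) % n_nodes, i] = r  # node i drives node i+1; the last wraps around
    return W

W = cycle_reservoir(121)  # 121 nodes, matching one core of the chip
```

One appeal of the ring is that its stability is trivial to control: the matrix's spectral radius is simply r, so no eigenvalue computation is needed to keep the dynamics near the edge of chaos.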
Using this design, the team was able to create a chip that consumed just 20 microwatts of power per core, or 80 microwatts of total power—significantly less than other CMOS-compatible physical reservoir computing designs, the authors say.
Predicting the future
In addition to beating humans at rock-paper-scissors, the reservoir computing chip can predict the next step in a time series across many different domains. “If what happens today is influenced by yesterday's data or other past data, it can predict the outcome,” says Sasaki.
The team demonstrated the chip's capabilities on several problems, including predicting the behavior of a famous chaotic system known as the logistic map. The team also tested the device on a quintessential real-world example of chaos: the weather. In both cases, the chip was able to predict the next step with remarkable accuracy.
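To give a flavor of the logistic-map task, here is a self-contained sketch that trains a small cycle reservoir, in software, to predict x_{t+1} = a·x_t·(1 − x_t) one step ahead. All the parameter values are illustrative, and a software simulation like this says nothing about the chip's analog dynamics; it only shows the kind of next-step prediction being described.

```python
import numpy as np

rng = np.random.default_rng(2)

# Logistic map, chaotic at a = 3.9: x_{t+1} = a * x_t * (1 - x_t)
a, T = 3.9, 2000
x = np.empty(T)
x[0] = 0.5
for t in range(T - 1):
    x[t + 1] = a * x[t] * (1 - x[t])

# Small cycle reservoir driven by the series.
N = 50
W = np.zeros((N, N))
W[(np.arange(N) + 1) % N, np.arange(N)] = 0.9  # one big loop
w_in = rng.uniform(-0.5, 0.5, N)

states = np.zeros((T, N))
s = np.zeros(N)
for t in range(T):
    s = np.tanh(W @ s + w_in * x[t])
    states[t] = s

# Train the readout on steps 200-1499 to predict the *next* value.
S, y = states[200:1500], x[201:1501]              # skip the warm-up transient
w_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)

pred = states[1500:-1] @ w_out                    # one-step-ahead predictions
rmse = np.sqrt(np.mean((pred - x[1501:]) ** 2))
print(f"one-step RMSE on held-out data: {rmse:.4f}")
```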
Forecast accuracy, however, is not the chip's main advantage. Its extremely low power consumption and latency could enable a new class of applications, such as real-time, on-device training in wearables and other edge devices.
“I think the prediction accuracy is about the same as current technology,” Sasaki says. “However, the power consumption and operating speed are perhaps 10 times better than current AI technology. That's a big difference.”