Researchers have developed a neuro-inspired analog computer that can train itself to improve at the tasks it performs. In experimental tests, the new system, which is based on the artificial intelligence algorithm known as “reservoir computing,” not only outperformed experimental reservoir computers that lack the new training algorithm on difficult computing tasks, but also tackled tasks considered beyond the reach of traditional reservoir computing.
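For context, a conventional reservoir computer feeds an input signal into a fixed random recurrent network (the “reservoir”) and trains only a linear readout of the reservoir's state. The sketch below is a minimal, illustrative echo state network in that conventional style, not the authors' analog system; the task (recalling the input from three steps earlier), reservoir size, and all parameter values are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # reservoir size (illustrative)

# Fixed random recurrent weights, rescaled so the spectral radius is 0.9,
# a common heuristic for the "echo state" (fading memory) property.
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=N)  # fixed random input weights

T = 1000
u = rng.uniform(-1, 1, size=T)           # random input signal
target = np.roll(u, 3)                   # task: recall the input from 3 steps ago
target[:3] = 0.0

# Drive the reservoir with the input, collecting its states over time.
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train ONLY the linear readout, here by ridge regression; the reservoir
# weights W and W_in are never modified.
washout = 50  # discard initial transient states
S, y = states[washout:], target[washout:]
ridge = 1e-6
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(N), S.T @ y)

pred = S @ W_out
mse = np.mean((pred - y) ** 2)
```

Training only the readout is what makes reservoir computing cheap to train, and it is this restriction that the self-learning approach described here relaxes by applying backpropagation-style optimization to the analog hardware itself.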

The results highlight the potential advantages of self-learning hardware for performing complex tasks, and support the possibility that self-learning systems, with their potential for high energy efficiency and ultrafast speeds, may offer a way to extend computing progress beyond the anticipated end of Moore's law.

The researchers, Michiel Hermans, Piotr Antonik, Marc Haelterman, and Serge Massar at the Université Libre de Bruxelles in Brussels, Belgium, have published a paper on the self-learning hardware in a recent issue of Physical Review Letters.

“On the one hand, over the past decade there has been remarkable progress in artificial intelligence, such as spectacular advances in image recognition and a computer beating the human Go world champion for the first time, and this progress is largely based on the use of error backpropagation,” Antonik told Phys.org. “On the other hand, there is growing interest, both in academia and industry (for example, at IBM and Hewlett Packard), in analog, brain-inspired computing as a possible route to circumvent the end of Moore’s law.”
