An Analog Network of Resistors Promises Machine Learning Without a Processor

Researchers from the University of Pennsylvania have come up with an interesting approach to machine learning that could help to address the field’s ever-growing power demands: taking the processor out of the picture and working directly on an analog network of resistors.

“Standard deep learning algorithms require differentiating large non-linear networks, a process that is slow and power-hungry,” the researchers explain. “Electronic learning metamaterials offer potentially fast, efficient, and fault-tolerant hardware for analog machine learning, but existing implementations are linear, severely limiting their capabilities. These systems differ significantly from artificial neural networks as well as the brain, so the feasibility and utility of incorporating non-linear elements have not been explored.”

A network of resistors, with no traditional processor in sight, has shown potential for non-linear machine learning tasks. (📷: Dillavou et al)

Until now, that is. The team's research introduces a non-linear learning metamaterial: an analog electronic network of resistive elements built around transistors. It isn't a traditional digital processor, and it can't handle the general-purpose tasks one can, but it is tailored specifically to machine learning workloads, and it proved able to perform computations beyond the reach of any linear system. The only conventional processor involved is an Arduino Due, used to take measurements and interface with MATLAB.

“Each resistor is simple and kind of meaningless on its own,” physicist Sam Dillavou, first author on the work, explains in an interview with MIT Technology Review. “But when you put them in a network, you can train them to do a variety of things.”
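To make that idea concrete, below is a minimal, heavily simplified NumPy sketch of "coupled learning," the kind of local training rule behind this class of hardware: compare a "free" state with a gently "clamped" state, and let every edge adjust its own conductance using only the voltage drop it sees. Everything here is illustrative rather than the team's code: the network size, node roles, and learning rates are invented for the example, and the real device is non-linear analog hardware rather than a simulated linear network.

```python
# Toy "coupled learning" on a simulated linear resistor network (illustrative
# sketch only; the actual metamaterial uses non-linear transistor elements
# and performs these updates in analog circuitry, not software).
import numpy as np

rng = np.random.default_rng(0)

N = 6                                                  # toy fully connected network
edges = [(i, j) for i in range(N) for j in range(i + 1, N)]
G = rng.uniform(0.5, 1.5, size=len(edges))             # learnable edge conductances

def solve(fixed_v, G):
    """Node voltages from Kirchhoff's laws, with some voltages held fixed."""
    L = np.zeros((N, N))                               # conductance-weighted Laplacian
    for (i, j), g in zip(edges, G):
        L[i, i] += g; L[j, j] += g
        L[i, j] -= g; L[j, i] -= g
    fixed = sorted(fixed_v)
    free = [n for n in range(N) if n not in fixed_v]
    b = -L[np.ix_(free, fixed)] @ np.array([fixed_v[n] for n in fixed])
    V = np.zeros(N)
    for n in fixed:
        V[n] = fixed_v[n]
    V[free] = np.linalg.solve(L[np.ix_(free, free)], b)
    return V

# Toy task: node 0 is the input (1 V), node 1 is ground, node 5 is the
# output, and we want the output to settle at 0.3 V.
target, eta, lr = 0.3, 0.1, 0.05
for step in range(500):
    V_f = solve({0: 1.0, 1: 0.0}, G)                   # free state: physics computes
    nudge = V_f[5] + eta * (target - V_f[5])           # clamp output slightly closer
    V_c = solve({0: 1.0, 1: 0.0, 5: nudge}, G)         # clamped state
    for k, (i, j) in enumerate(edges):                 # purely local updates
        dF = V_f[i] - V_f[j]
        dC = V_c[i] - V_c[j]
        G[k] = max(G[k] + (lr / eta) * (dF ** 2 - dC ** 2), 1e-3)

print(f"trained output: {solve({0: 1.0, 1: 0.0}, G)[5]:.3f} V (target {target} V)")
```

Because each update depends only on voltages measured across that one element, no central processor ever needs to compute a gradient; in the hardware, the comparison and update happen in the analog circuitry itself.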

The team has already demonstrated the same core technology in an image classification network, and its latest work extends the concept to non-linear regression and exclusive OR (XOR) operations. Better still, it shows the potential to outperform the traditional approach of throwing the problems at digital processors: “We find our non-linear learning metamaterial reduces modes of training error in order (mean, slope, curvature),” the team claims, “similar to spectral bias in artificial neural networks.”
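XOR is the classic example of a task no purely linear system can learn, which is why it makes a meaningful benchmark here. A quick NumPy check (again illustrative, not the team's experiment) shows both the failure and the fix:

```python
# Why XOR needs non-linearity: the best any linear model can do is predict
# 0.5 for every input, while a single non-linear feature (the product x1*x2)
# makes the task exactly solvable.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)

A_lin = np.hstack([X, np.ones((4, 1))])                # inputs plus a bias column
w, *_ = np.linalg.lstsq(A_lin, y, rcond=None)
print("linear fit:    ", A_lin @ w)                    # [0.5 0.5 0.5 0.5]

A_nl = np.hstack([A_lin, X[:, :1] * X[:, 1:]])         # append the x1*x2 term
w, *_ = np.linalg.lstsq(A_nl, y, rcond=None)
print("non-linear fit:", (A_nl @ w).round(3))          # [0. 1. 1. 0.]
```

That gap is exactly what the transistor-based non-linear elements add to an otherwise linear resistor network.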

The network itself has no external memory or traditional processor, but is supervised and measured by an Arduino Due. (📷: Dillavou et al)

“The circuitry is robust to damage,” the researchers continue, “retrainable in seconds, and performs learned tasks in microseconds while dissipating only picojoules of energy across each transistor. This suggests enormous potential for fast, low-power computing in edge systems like sensors, robotic controllers, and medical devices, as well as manufacturability at scale for performing and studying emergent learning.”
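Those figures translate into very small power budgets. As a rough back-of-envelope (using assumed values, since the quoted excerpt gives no element count):

```python
# Back-of-envelope only; the element count below is hypothetical, not from
# the paper. "Picojoules per transistor" over "microsecond" tasks implies
# roughly microwatt-scale power per element.
energy_per_element = 1e-12   # J per task per transistor ("picojoules")
task_duration = 1e-6         # s per learned task ("microseconds")
n_elements = 1_000           # assumed network size, for illustration

power_watts = n_elements * energy_per_element / task_duration
print(f"~{power_watts * 1e3:.1f} mW for {n_elements} elements")  # ~1.0 mW
```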

There is, of course, a catch: in its current form, a prototype spread across a series of solderless breadboards, the metamaterial system draws around ten times the power of a state-of-the-art digital machine learning accelerator. As it scales, though, Dillavou says the technology should deliver on its promise of greater efficiency and the ability to drop external memory components from the bill of materials.

The team’s work has been published as a preprint on Cornell’s arXiv server.

Main article image courtesy of Felice Macera.

Gareth Halfacree

Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.
