Brains on a battery: Low-power neural net developed, phones could follow


Researchers at MIT have paved the way for low-power neural networks that can run on devices such as smartphones and household appliances. Andrew Hobbs explains why this could be so important for connected applications and businesses.

Many scientific breakthroughs are built on concepts found in nature – so-called bio-inspiration – such as the use of synthetic muscle in soft-robotics.

Neural networks are one example of this. They depart from standard approaches to computing by mimicking the human brain. Usually, a large network of neurons is developed, without task-specific programming. This can learn from labelled training data, and apply those lessons to future data sets, gradually improving in performance.

For example, a neural network may be fed a set of images labelled ‘cats’ and from that be able to identify cats in other images, without being told what the defining traits of a cat might be.

But there’s a problem. The neurons are linked to one another, much like synapses in our own brains. These nodes and connections typically have a weight associated with them that adjusts as the network learns, affecting the strength of the signal output and, by extension, the final sum.

As a result, constantly transmitting a signal and passing data across this huge network of nodes requires large amounts of energy, making neural nets unsuited to battery-powered devices, such as smartphones.

Consequently, neural network applications such as speech- and face-recognition programs have long relied on external servers to process the data relayed to them, which is itself an energy-intensive process. Even in humanoid robotics, the only route to satisfactory natural language processing has been via services such as IBM’s Watson in the cloud.

A new neural network

All that is set to change, however. Researchers at the Massachusetts Institute of Technology (MIT) have developed a chip that increases the speed of neural network computations by three to seven times, while cutting power consumption by up to 95 percent.

This opens up the potential for smart home and mobile devices to host neural networks natively.

“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” Avishek Biswas, the MIT graduate student in electrical engineering and computer science who led the chip’s development, told MIT News.

Traditionally, neural networks consist of layers of nodes that pass data upwards from one layer to the next. Each node multiplies the data it receives by the weight of the relevant connection and sums the results; this multiply-and-accumulate operation is known as a dot product.

“Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption,” said Biswas.

“But the computation these algorithms do can be simplified to one specific operation, the dot product. Our approach was, can we implement this dot-product functionality inside the memory, so that you don’t need to transfer this data back and forth?”
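To make the operation concrete, here is a minimal Python sketch of the multiply-and-accumulate that each node performs. The inputs and weights are made-up illustrative values, not anything from the MIT chip itself:

```python
def node_output(inputs, weights):
    """The dot product Biswas describes: multiply each input by its
    connection weight, then sum the results."""
    assert len(inputs) == len(weights)
    return sum(x * w for x, w in zip(inputs, weights))

# Illustrative values for a single three-input node.
inputs = [0.5, -1.2, 0.8]
weights = [0.9, 0.4, -0.3]
print(node_output(inputs, weights))  # 0.45 - 0.48 - 0.24
```

Performing exactly this operation inside the memory array, rather than shuttling every weight to a separate processor, is what saves the energy.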

A mind for maths

This process will sometimes occur across millions of nodes. Given that each node weight is stored in memory, this amounts to enormous quantities of data to transfer.

In a human brain, synapses connect whole bundles of neurons, rather than individual nodes. The electrochemical signals that pass across these synapses are modulated to alter the information transmitted.

The MIT chip mimics this process more closely by calculating dot products for 16 nodes at a time. These combined voltages are then converted to a digital signal and stored for further processing, drastically reducing the number of data calls on the memory.

While many networks have numerous possible weights, this new system operates with just two: 1 and -1. This binary system acts as a switch within the memory itself, simply closing or opening a circuit. While this seemingly reduces the accuracy of the network, the measured loss is just two to three percent – perfectly acceptable for many workloads.
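A short Python sketch (with illustrative values, not the chip’s actual circuitry) shows why restricting weights to 1 and -1 is such a saving: the multiplication in the dot product collapses to simply keeping or negating each input.

```python
def binary_dot(inputs, weights):
    """Dot product with weights restricted to +1/-1: no multiplication,
    each input is either kept or negated -- conceptually, a circuit
    closed or opened."""
    assert all(w in (1, -1) for w in weights)
    return sum(x if w == 1 else -x for x, w in zip(inputs, weights))

def full_dot(inputs, weights):
    """Ordinary dot product, for comparison."""
    return sum(x * w for x, w in zip(inputs, weights))

inputs = [0.7, 0.2, -0.5, 0.1]
weights = [1, -1, -1, 1]
# Both give the same answer; the binary version never multiplies.
print(binary_dot(inputs, weights) == full_dot(inputs, weights))
```

In hardware, that missing multiplication is the point: a switch that adds or subtracts a voltage is far cheaper than a full multiplier.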

Internet of Business says

At a time when edge computing is gaining traction, the ability to bring neural network computation out of the cloud and into everyday devices is an exciting prospect.

We’re still uncovering the vast potential of neural networks, but they’re undoubtedly relevant to mobile devices. We’ve recently seen their ability to predict health risks in fitness trackers, such as Fitbit and Apple Watch.

By allowing this kind of work to take place on mobile devices and wearables – as well as other tasks, such as image classification and language processing – there is huge scope to reduce energy usage.

MIT’s findings also open the door to more complex networks in the future, without having to worry so much about spiralling computational and energy costs.

However, the far-reaching power of abstraction inherent in neural networks comes at the cost of transparency. Their methods may be opaque – so-called black-box solutions – and we expose ourselves to both the prejudices and the restrictions that may come with limited machine learning models, not to mention any training data that replicates human bias.

Of course, the same problems – lack of transparency and bias – can be found in people too, and we audit companies without having to understand how any individual’s synapses are firing.

But the lesson here is that, when the outcome has significant implications, neural networks should be used alongside more transparent models, where methods can be held to account. Just as critical human decision-making processes must adhere to rules and regulations.

The post Brains on a battery: Low-power neural net developed, phones could follow appeared first on Internet of Business.


Technology can’t save football players’ brains

Tregg Duerson was 25 years old when his father committed suicide in 2011. A former defensive back for the Chicago Bears, New York Giants and Phoenix Cardinals, David "Dave" Duerson made a career out of being one of the most feared tacklers during his…
Engadget RSS Feed

Scientists Are Closer to Making Artificial Brains That Operate Like Ours Do

The Missing Piece

A new superconducting switch could soon enable computers to make decisions very similarly to the way we do, essentially turning them into artificial brains. One day, this new technology could underpin advanced artificial intelligence (AI) systems that may become part of our everyday life, from transportation to medicine.

Researchers at the U.S. National Institute of Standards and Technology (NIST) explain that, much like a biological brain, the switch “learns” by processing the electrical signals it receives and producing appropriate output signals. The process mirrors the function of biological synapses in the brain, which allow neurons to communicate with each other.

The artificial synapse, which is described in a paper published in Science Advances on Friday, Jan. 26, has the shape of a metallic cylinder and is 10 micrometers (0.0004 inches) wide. It is designed so it can learn through experience — or even from just the surrounding environment.

As is increasingly common in the field of AI, this synthetic switch performs even better than its biological counterpart, using much less energy than our brains do and firing signals much faster than human neurons – 1 billion times per second. For comparison, our synapses fire about 50 times per second. This has a significant impact on processing, because the greater the frequency of electric signals fired and received, the stronger the connection between synapses becomes.

A Human-Like AI

The switch is meant to boost the ability of so-called “neuromorphic computers”, which can support AI that could one day be vital to improving the perception and decision-making abilities of smart devices such as self-driving cars and even cancer diagnostic tools.

The world’s largest car makers are investing in technologies able to replace a human driver, but there is still a long way to go. No matter how safe driverless cars eventually become, the AI driver will face the moral dilemma of having to decide whether to prioritize the safety of its passengers or of others who might be involved in a collision. This switch could give the artificial brains that make these decisions more capacity to deal with such ethical conundrums.

The switch could also help us develop more accurate AI that can diagnose diseases such as heart conditions and lung cancer. For example, doctors from the John Radcliffe Hospital in Oxford, U.K., have successfully tested an artificial brain that improves the ability of doctors to detect life-threatening heart conditions, and a startup suggested its AI system could catch as many as 4,000 lung cancers per year earlier than human doctors.

While AI could be a game changer in medicine, the conventional computers that run its systems still struggle with tasks such as context recognition. This is because, the NIST researchers say, they don’t keep memories the same way we do. Our brain both processes information and stores memories in synapses at the same time, while computers perform the two tasks separately.

But the new artificial synapse addresses this problem, allowing computers to mimic the human brain. Although it is still being tested, researchers are confident that it may one day power a new generation of artificial brains able to improve on the current capabilities of AI systems.

The post Scientists Are Closer to Making Artificial Brains That Operate Like Ours Do appeared first on Futurism.


Researchers Used Virtual Reality to Gain Insight into How Our Brains Assemble Memories

Memory Research

Researchers are using virtual reality (VR) to better explore how human brains assemble memories and organize them in context. In a new study, published in the journal Nature Communications, researchers put human volunteer subjects into a VR experience and then observed the activity in their hippocampuses. Through this experiment, the researchers were able to show that different parts of the hippocampus are activated in response to different types of memories.

Researchers from the University of California, Davis, studied how our brains assemble memories within the context of time and space by immersing participants in a VR experience. Afterwards, the scientists used functional magnetic resonance imaging, or fMRI, to observe activity in the hippocampus while the subjects recalled their memories of the experience.

[Image: Participants try to recall objects from houses in a VR experience.]

In the VR experience, the subjects “went” into different houses that had different objects in them. They tried to memorize the objects in two separate contexts — which video and which house. This tested both episodic (video) and spatial (house) memory, which each activated different regions of the hippocampus.

Medical VR

This study allowed the researchers to identify a region of the hippocampus that is involved in recalling shared information about contexts (such as virtual objects that were in the same video) and another, distinct area that is involved in remembering differences in context. Additionally, the experiment revealed that the hippocampus is involved in episodic memories that link time and space, contradicting the previous thinking that the hippocampus codes mostly for spatial memories.


This study shows just how widely applicable VR technologies can be to physiological and medical research. There are already VR systems designed to help medical students learn in a more realistic environment, and future operating rooms and hospitals could easily be equipped with VR training tech.

Additionally, VR systems could allow surgeons to better assess and view the operating area before a procedure is performed. The tech could even be applied as a diagnostic tool to provide a 3-dimensional, immersive look at patients’ bodies that could reveal aspects of a diagnosis that might otherwise go unseen.

The post Researchers Used Virtual Reality to Gain Insight into How Our Brains Assemble Memories appeared first on Futurism.


Physicists Overturn a 100-Year-Old Assumption on How Brains Work

The human brain contains around 80 billion neurons, each joining with other cells to create trillions of connections called synapses.

The numbers are mind-boggling, but the way each individual nerve cell contributes to the brain’s functions is still an area of contention. A new study has overturned a hundred-year-old assumption about what exactly makes a neuron ‘fire’, suggesting new mechanisms behind certain neurological disorders.

A team of physicists from Bar-Ilan University in Israel conducted experiments on rat neurons grown in a culture to determine exactly how a neuron responds to the signals it receives from other cells.

To understand why this is important, we need to go back to 1907 when a French neuroscientist named Louis Lapicque proposed a model to describe how the voltage of a nerve cell’s membrane increases as a current is applied.

Once the voltage reaches a certain threshold, the neuron reacts with a spike of activity, after which the membrane’s voltage resets.

What this means is a neuron won’t send a message unless it collects a strong enough signal.

Lapicque’s equations weren’t the last word on the matter, not by far. But the basic principle of his integrate-and-fire model has remained relatively unchallenged in subsequent descriptions, today forming the foundation of most neuronal computational schemes.
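Lapicque’s integrate-and-fire idea can be sketched in a few lines of Python. This is a toy leaky integrate-and-fire neuron; all constants are illustrative, not fitted to real cells:

```python
def simulate(current, steps=200, dt=1.0, tau=20.0, threshold=1.0):
    """Count spikes from a leaky integrate-and-fire neuron driven by a
    constant input current: voltage leaks toward rest, integrates the
    input, and fires (then resets) on crossing the threshold."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (-v / tau + current)  # leak + integrate
        if v >= threshold:              # threshold crossed: spike
            spikes += 1
            v = 0.0                     # reset after the spike
    return spikes

# A weak current never accumulates enough voltage to fire;
# a stronger one fires repeatedly.
print(simulate(0.01), simulate(0.2))
```

This captures the assumption the new study questions: in this model, only the summed voltage matters, not where on the neuron the inputs arrive.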

Image credit: NICHD/Flickr

According to the researchers, the lengthy history of the idea has meant few have bothered to question whether it’s accurate.

“We reached this conclusion using a new experimental setup, but in principle these results could have been discovered using technology that has existed since the 1980s,” says lead researcher Ido Kanter.

“The belief that has been rooted in the scientific world for 100 years resulted in this delay of several decades.”

The experiments approached the question from two angles – one exploring the nature of the activity spike based on exactly where the current was applied to a neuron, the other looking at the effect multiple inputs had on a nerve’s firing.

Their results suggest the direction of a received signal can make all the difference in how a neuron responds.

A weak signal from the left arriving with a weak signal from the right won’t combine to build a voltage that kicks off a spike of activity. But a single strong signal from a particular direction can result in a message.
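As an illustrative caricature (not the Bar-Ilan team’s actual model), the contrast between classic spatial summation and the direction-sensitive behaviour reported here can be sketched like this:

```python
THRESHOLD = 1.0

def classic_fires(left, right):
    """Classic spatial summation: inputs from all directions pool
    together toward a single threshold."""
    return (left + right) >= THRESHOLD

def directional_fires(left, right):
    """Direction-sensitive caricature: each direction is thresholded
    on its own, so weak inputs from opposite sides don't combine."""
    return left >= THRESHOLD or right >= THRESHOLD

# Two weak signals: the classic model fires, the directional one doesn't.
print(classic_fires(0.6, 0.6), directional_fires(0.6, 0.6))
# One strong signal from the left: both fire.
print(classic_fires(1.2, 0.0), directional_fires(1.2, 0.0))
```

The real picture is certainly more nuanced, but the sketch shows why the finding matters: the two models disagree about whether a pair of weak, opposing inputs should produce a spike.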

This potentially new way of describing what’s known as spatial summation could lead to a novel method of categorising neurons – one that sorts them by how they compute incoming signals, or by how fine their directional resolution is.

Better yet, it could even lead to discoveries that explain certain neurological disorders.

It’s important not to throw out a century of wisdom on the topic on the back of a single study. The researchers also admit they’ve only looked at a type of nerve cell called pyramidal neurons, leaving plenty of room for future experiments.

But fine-tuning our understanding of how individual units combine to produce complex behaviours could spread into other areas of research. With neural networks inspiring future computational technology, identifying any new talents in brain cells could have some rather interesting applications.

This research was published in Scientific Reports.

The post Physicists Overturn a 100-Year-Old Assumption on How Brains Work appeared first on Futurism.
