There’s an algorithm to simulate our brains. Too bad no computer can run it



Scientists just created an algorithm capable of performing a complete human brain simulation. Now we just have to wait for someone to build a computer powerful enough to run it. The team, composed of researchers from Germany, Japan, Norway, and Sweden, recently published a white paper detailing the new algorithm, which connects virtual neurons with nodes. It’s designed to simulate the brain’s one billion connections between individual neurons and synapses. A human brain’s neuronal activity is incredibly complex, and simulating it at a 1:1 ratio is impossible with current technology. Achieving just a 10 percent simulation rate maxes out the…

This story continues at The Next Web


Future Computers Will Process and Remember Info at the Same Time, Functioning More Like Real Brains

Brain-Like Computers

As much as it might seem like our computers are “thinking” as they perform human-like tasks, like recognizing our faces and predicting what we might say next, they don’t actually function like the human brain — at least not yet. Researchers at Northwestern University’s McCormick School of Engineering have developed a device known as the “memtransistor,” which performs both memory and information processing functions. This makes it remarkably similar to a neuron and unlike a computer, which can only complete these processes separately. The team’s work was recently published in the journal Nature.

An artist’s depiction of the memtransistor in between two halves of a “brain.” Image Credit: Hersam Research Group

The memtransistor is essentially a combination of a memristor and a transistor. Memristors, or memory resistors, remember the voltage that has been applied to them but can only control a single voltage channel. By transforming such a memristor from a two-terminal to a three-terminal device in the memtransistor, the Northwestern team made this tech much more capable for complex circuits and systems.

Developing an efficient, working neural network that operates like the memtransistor would not only be more brain-like; it might also use less energy than digital computers, as it would eliminate the need to run two separate processes.

Transforming Tech

Study leader Mark C. Hersam explained in a press release why the memtransistor is more brain-like and effective: “…in the brain, we don’t usually have one neuron connected to only one other neuron. Instead, one neuron is connected to multiple other neurons to form a network. Our device structure allows multiple contacts, which is similar to the multiple synapses in neurons.”

The researchers believe that it will be relatively simple to scale up this technology for larger, practical use.

“Making dozens of devices, as we have done in our paper, is different than making a billion, which is done with conventional transistor technology today,” Hersam qualified. However, he added: “Thus far, we do not see any fundamental barriers that will prevent further scale up of our approach.”

Whether this scale-up will actually take place is yet to be seen. But such technology could make the computers and smart devices that we interface with every day smarter and more capable, and even perhaps make them start to feel more organic and even human. It could also allow neural networks to advance and perhaps make futuristic tech like brain-computer interfaces much more possible.

The post Future Computers Will Process and Remember Info at the Same Time, Functioning More Like Real Brains appeared first on Futurism.


Brains on a battery: Low-power neural net developed, phones could follow


Researchers at MIT have paved the way to low-power neural networks that can run on devices such as smartphones and household appliances. Andrew Hobbs explains why this could be so important for connected applications and businesses.

Many scientific breakthroughs are built on concepts found in nature – so-called bio-inspiration – such as the use of synthetic muscle in soft-robotics.

Neural networks are one example of this. They depart from standard approaches to computing by mimicking the human brain. Usually, a large network of neurons is developed, without task-specific programming. This can learn from labelled training data, and apply those lessons to future data sets, gradually improving in performance.

For example, a neural network may be fed a set of images labelled ‘cats’ and from that be able to identify cats in other images, without being told what the defining traits of a cat might be.

But there’s a problem. The neurons are linked to one another, much like synapses in our own brains. These nodes and connections typically have a weight associated with them that adjusts as the network learns, affecting the strength of the signal output and, by extension, the final sum.
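The weighted connections described above can be sketched in a few lines of Python. The sigmoid activation and the specific numbers here are illustrative choices, not details of any particular network:

```python
import math

def node_output(inputs, weights, bias=0.0):
    """Weighted sum of inputs, squashed through a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# A strong positive weight amplifies its input's contribution to the
# output signal; a negative weight suppresses it.
print(node_output([1.0, 0.5], [2.0, -1.0]))  # sigmoid(1.5), roughly 0.8176
```

As the network learns, these weights are nudged up or down, strengthening or weakening each connection's influence on the final output.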

As a result, constantly transmitting a signal and passing data across this huge network of nodes requires large amounts of energy, making neural nets unsuited to battery-powered devices, such as smartphones.

As a result, neural network applications such as speech- and face-recognition programs have long relied on external servers to process the data that has been relayed to them, which is itself an energy-intensive process. Even in humanoid robotics, the only route to satisfactory natural language processing has been via services such as IBM’s Watson in the cloud.

A new neural network

All that is set to change, however. Researchers at the Massachusetts Institute of Technology (MIT) have developed a chip that increases the speed of neural network computations by three to seven times, while cutting power consumption by up to 95 percent.

This opens up the potential for smart home and mobile devices to host neural networks natively.

“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” said Avishek Biswas, the MIT graduate student in electrical engineering and computer science who led the chip’s development, in an interview with MIT News.

Traditionally, neural networks consist of layers of nodes that pass data upwards, one to the next. Each node multiplies the data it receives by the weight of the relevant connection and sums the results. This multiply-and-accumulate operation is known as a dot product.

“Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption,” said Biswas.

“But the computation these algorithms do can be simplified to one specific operation, the dot product. Our approach was, can we implement this dot-product functionality inside the memory, so that you don’t need to transfer this data back and forth?”
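In software terms, the dot product Biswas describes is a multiply-and-accumulate over a node’s inputs and stored weights, and a layer of the network is just many such dot products, one per output node. A minimal illustration (the numbers are arbitrary):

```python
def dot_product(activations, weights):
    """The core operation each node performs: multiply and accumulate."""
    return sum(a * w for a, w in zip(activations, weights))

def layer(activations, weight_rows):
    """One layer of a network: one dot product per output node."""
    return [dot_product(activations, row) for row in weight_rows]

# Two output nodes, each with its own row of stored weights.
print(layer([1.0, 2.0, 3.0], [[0.5, 0.5, 0.5], [1.0, 0.0, -1.0]]))  # [3.0, -2.0]
```

On a conventional chip, every one of those stored weights has to be fetched from memory before it can be multiplied; computing the dot product inside the memory itself is what removes that traffic.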

A mind for maths

This process will sometimes occur across millions of nodes. Given that each node weight is stored in memory, this amounts to enormous quantities of data to transfer.

In a human brain, synapses connect whole bundles of neurons, rather than individual nodes. The electrochemical signals that pass across these synapses are modulated to alter the information transmitted.

The MIT chip mimics this process more closely by calculating dot products for 16 nodes at a time. These combined voltages are then converted to a digital signal and stored for further processing, drastically reducing the number of data calls on the memory.

While many networks have numerous possible weights, this new system operates with just two: 1 and -1. This binary system acts as a switch within the memory itself, simply closing or opening a circuit. While this seemingly reduces the accuracy of the network, the reality is just a two to three percent loss – perfectly acceptable for many workloads.
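The two-weight scheme can be mimicked in software by quantizing each weight to +1 or -1, which collapses every multiplication into an addition or subtraction. This is a toy sketch of the idea, not the chip’s actual circuitry:

```python
def binarize(weights):
    """Quantize real-valued weights to the chip's two levels, +1 and -1."""
    return [1 if w >= 0 else -1 for w in weights]

def binary_dot(activations, bin_weights):
    # With weights restricted to +1/-1, each multiply is just an add or a
    # subtract -- the closed-or-open circuit described above.
    return sum(a if w == 1 else -a for a, w in zip(activations, bin_weights))

w = binarize([0.7, -0.2, 0.1])          # [1, -1, 1]
print(binary_dot([2.0, 3.0, 1.0], w))   # 2 - 3 + 1 = 0.0
```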

Internet of Business says

At a time when edge computing is gaining traction, the ability to bring neural network computation out of the cloud and into everyday devices is an exciting prospect.

We’re still uncovering the vast potential of neural networks, but they’re undoubtedly relevant to mobile devices. We’ve recently seen their ability to predict health risks in fitness trackers, such as Fitbit and Apple Watch.

By allowing this kind of work to take place on mobile devices and wearables – as well as other tasks, such as image classification and language processing – there is huge scope to reduce energy usage.

MIT’s findings also open the door to more complex networks in the future, without having to worry so much about spiralling computational and energy costs.

However, the far-reaching power of abstraction inherent in neural networks comes at the cost of transparency. Their methods may be opaque – so-called “black box” solutions – and we expose ourselves to both the prejudices and the restrictions that may come with limited machine learning models, not to mention any training data that replicates human bias.

Of course, the same problems of opacity and bias can be found in people too, and we audit companies without having to understand how any individual’s synapses are firing.

But the lesson here is that, when the outcome has significant implications, neural networks should be used alongside more transparent models, where methods can be held to account. Just as critical human decision-making processes must adhere to rules and regulations.

The post Brains on a battery: Low-power neural net developed, phones could follow appeared first on Internet of Business.


Technology can’t save football players’ brains

Tregg Duerson was 25 years old when his father committed suicide in 2011. A former defensive back for the Chicago Bears, New York Giants and Phoenix Cardinals, David "Dave" Duerson made a career out of being one of the most feared tacklers during his…
Engadget RSS Feed

Scientists Are Closer to Making Artificial Brains That Operate Like Ours Do

The Missing Piece

A new superconducting switch could soon enable computers to make decisions very similarly to the way we do, essentially turning them into artificial brains. One day, this new technology could underpin advanced artificial intelligence (AI) systems that may become part of our everyday life, from transportation to medicine.

Researchers at the U.S. National Institute of Standards and Technology (NIST) explain that, much like a biological brain, the switch “learns” by processing the electrical signals it receives and producing appropriate output signals. The process mirrors the function of biological synapses in the brain, which allow neurons to communicate with each other.

The artificial synapse, which is described in a paper published in Science Advances on Friday, Jan. 26, has the shape of a metallic cylinder and is 10 micrometers (0.0004 inches) wide. It is designed so it can learn through experience — or even from just the surrounding environment.

As is increasingly common in the field of AI, this synthetic switch performs even better than its biological counterpart, using much less energy than our brains do and firing signals much faster than human neurons – 1 billion times per second, compared with about 50 times per second for our synapses. This has a significant impact on processing, because the greater the frequency of electric signals fired and received, the stronger the connection between synapses becomes.
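The speed gap the researchers cite is easy to make concrete: at 1 billion firings per second against roughly 50 for a biological synapse, the artificial synapse fires about 20 million times faster:

```python
artificial_hz = 1_000_000_000  # firings per second, per the NIST synapse
biological_hz = 50             # firing rate cited above for human synapses

speedup = artificial_hz // biological_hz
print(f"{speedup:,}x faster")  # 20,000,000x faster
```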

A Human-Like AI

The switch is meant to boost the abilities of so-called “neuromorphic computers,” which can support AI that one day could be vital to improving the perception and decision-making abilities of smart devices such as self-driving cars, and even cancer diagnostic tools.

The world’s largest carmakers are investing in technologies able to replace a human driver, but there is still a long way to go. No matter how safe driverless cars eventually become, the AI driver will at some point face the moral dilemma of having to decide whether to prioritize the safety of its passengers or of others who might be involved in a collision. This switch could give the artificial brains that make these decisions more capacity to deal with such ethical conundrums.

The switch could also help us develop more accurate AI that can diagnose diseases such as heart conditions and lung cancer. For example, doctors from the John Radcliffe Hospital in Oxford, U.K., have successfully tested an artificial brain that improves the ability of doctors to detect life-threatening heart conditions, and a startup suggested its AI system could catch as many as 4,000 lung cancers per year earlier than human doctors.

While AI could be a game changer in medicine, the conventional computers that run its systems still struggle with tasks such as context recognition. This is because, the NIST researchers say, they don’t keep memories the same way we do. Our brain both processes information and stores memories in synapses at the same time, while computers perform the two tasks separately.

But the new artificial synapse addresses this problem, allowing computers to mimic the human brain. Although it is still being tested, researchers are confident that it may one day power a new generation of artificial brains able to improve on the current capabilities of AI systems.

The post Scientists Are Closer to Making Artificial Brains That Operate Like Ours Do appeared first on Futurism.


Researchers Used Virtual Reality to Gain Insight into How Our Brains Assemble Memories

Memory Research

Researchers are using virtual reality (VR) to better explore how human brains assemble memories and organize them in context. In a new study, published in the journal Nature Communications, researchers put human volunteer subjects into a VR experience and then observed the activity in their hippocampuses. Through this experiment, the researchers were able to show that different parts of the hippocampus are activated in response to different types of memories.

Researchers from the University of California, Davis, studied how our brains assemble memories within context of time and space by immersing participants in a VR experience. Afterwards, the scientists used functional magnetic resonance imaging, or fMRI, to observe activity in the hippocampus while the subjects recalled their memories of the experience.

Participants try to recall objects from houses in a VR experience.

In the VR experience, the subjects “went” into different houses that had different objects in them. They tried to memorize the objects in two separate contexts — which video and which house. This tested both episodic (video) and spatial (house) memory, which each activated different regions of the hippocampus.

Medical VR

This study allowed the researchers to identify a region of the hippocampus that is involved in recalling shared information about contexts (such as virtual objects that were in the same video) and another, distinct area that is involved in remembering differences in context. Additionally, the experiment revealed that the hippocampus is involved in episodic memories that link time and space, contradicting the previous thinking that the hippocampus codes mostly for spatial memories.


This study shows just how widely applicable VR technologies can be to physiological and medical research. There are already VR systems designed to help medical students learn in a more realistic environment, and future operating rooms and hospitals could easily be equipped with VR training tech.

Additionally, VR systems could allow surgeons to better assess and view the operating area before a procedure is performed. The tech could even be applied as a diagnostic tool, providing a three-dimensional, immersive look at patients’ bodies that could reveal aspects of a diagnosis that might otherwise go unseen.

The post Researchers Used Virtual Reality to Gain Insight into How Our Brains Assemble Memories appeared first on Futurism.


Physicists Overturn a 100-Year-Old Assumption on How Brains Work

The human brain contains some 80 billion neurons, each joining with other cells to create trillions of connections called synapses.

The numbers are mind-boggling, but the way each individual nerve cell contributes to the brain’s functions is still an area of contention. A new study has overturned a hundred-year-old assumption on what exactly makes a neuron ‘fire’, posing new mechanisms behind certain neurological disorders.

A team of physicists from Bar-Ilan University in Israel conducted experiments on rat neurons grown in a culture to determine exactly how a neuron responds to the signals it receives from other cells.

To understand why this is important, we need to go back to 1907 when a French neuroscientist named Louis Lapicque proposed a model to describe how the voltage of a nerve cell’s membrane increases as a current is applied.

Once reaching a certain threshold, the neuron reacts with a spike of activity, after which the membrane’s voltage resets.

What this means is a neuron won’t send a message unless it collects a strong enough signal.

Lapicque’s equations weren’t the last word on the matter, not by far. But the basic principle of his integrate-and-fire model has remained relatively unchallenged in subsequent descriptions, today forming the foundation of most neuronal computational schemes.
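The integrate-and-fire principle is simple enough to sketch as a discrete-time simulation. The threshold, leak, and input values below are arbitrary illustrative parameters, not figures from the study:

```python
def integrate_and_fire(currents, threshold=1.0, leak=0.1, dt=1.0):
    """Discrete-time leaky integrate-and-fire neuron: the membrane voltage
    accumulates input current, decays via a leak term, and resets to zero
    after each spike."""
    v, spikes = 0.0, []
    for t, i in enumerate(currents):
        v += dt * (i - leak * v)   # integrate the input, minus leakage
        if v >= threshold:         # threshold crossed: the neuron fires
            spikes.append(t)
            v = 0.0                # voltage resets after the spike
    return spikes

# A weak but sustained current eventually drives the voltage over threshold.
print(integrate_and_fire([0.3] * 10))  # spikes at steps 3 and 7
```

Note that in this classic model only the total accumulated voltage matters; where the input current comes from plays no role, which is exactly the assumption the new study challenges.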

Image credit: NICHD/Flickr

According to the researchers, the lengthy history of the idea has meant few have bothered to question whether it’s accurate.

“We reached this conclusion using a new experimental setup, but in principle these results could have been discovered using technology that has existed since the 1980s,” says lead researcher Ido Kanter.

“The belief that has been rooted in the scientific world for 100 years resulted in this delay of several decades.”

The experiments approached the question from two angles – one exploring the nature of the activity spike based on exactly where the current was applied to a neuron, the other looking at the effect multiple inputs had on a nerve’s firing.

Their results suggest the direction of a received signal can make all the difference in how a neuron responds.

A weak signal from the left arriving with a weak signal from the right won’t combine to build a voltage that kicks off a spike of activity. But a single strong signal from a particular direction can result in a message.

This potentially new way of describing what’s known as spatial summation could lead to a novel method of categorising neurons: one that sorts them by how they compute incoming signals, or by how finely they resolve a signal’s direction.
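A toy model makes the contrast concrete. In classic spatial summation, all inputs pool toward a single threshold; in a direction-sensitive scheme, only inputs arriving from the same side sum together. This is an illustrative sketch, not the researchers’ actual model:

```python
THRESHOLD = 1.0

def classic_sum_fires(signals):
    """Classic spatial summation: all inputs pool toward one threshold."""
    return sum(strength for _, strength in signals) >= THRESHOLD

def directional_fires(signals):
    """Toy directional rule: only inputs from the same side sum together."""
    per_side = {}
    for side, strength in signals:
        per_side[side] = per_side.get(side, 0.0) + strength
    return max(per_side.values()) >= THRESHOLD

weak_both_sides = [("left", 0.6), ("right", 0.6)]
print(classic_sum_fires(weak_both_sides))   # True: 0.6 + 0.6 crosses 1.0
print(directional_fires(weak_both_sides))   # False: neither side reaches 1.0
print(directional_fires([("left", 1.1)]))   # True: one strong signal fires
```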

Better yet, it could even lead to discoveries that explain certain neurological disorders.

It’s important not to throw out a century of wisdom on the topic on the back of a single study. The researchers also admit they’ve only looked at a type of nerve cell called pyramidal neurons, leaving plenty of room for future experiments.

But fine-tuning our understanding of how individual units combine to produce complex behaviours could spread into other areas of research. With neural networks inspiring future computational technology, identifying any new talents in brain cells could have some rather interesting applications.

This research was published in Scientific Reports.

The post Physicists Overturn a 100-Year-Old Assumption on How Brains Work appeared first on Futurism.


Huawei Mate 10 Pro review: Beauty and brains, but a questionable bargain

Huawei isn’t a widely known name in the US market, but that hasn’t stopped the Chinese company from becoming the second largest smartphone maker on the planet. As its fortunes have risen, so has the quality of the hardware. Last year’s Mate 9 was a reliable phone, and Huawei’s revamped Nougat version of Android eliminated many of the pain points from its past devices.

Now, we’ve got the Mate 10 and Mate 10 Pro on the horizon.

Read More

Huawei Mate 10 Pro review: Beauty and brains, but a questionable bargain was written by the awesome team at Android Police.


Growing Human Mini Brains in Rats Isn’t an Ethical Concern…Yet

If I Only Had a Brain

Thanks to stem cells, scientists are able to create miniature human brains in lab conditions — and now, it’s even possible to grow these organoids in animals. Concerns about the ethical implications of this work have been raised in the past, but they’re set to become even more pressing when previously unpublished research is presented at the annual Society for Neuroscience meeting, beginning November 11, 2017.

Organoids could be a huge boon to research about the brain, as we can use lab-grown gray matter for studies that would be deemed unethical if a living human subject were to be involved. The organoids mirror both physical characteristics and reactions of human brains that are early in development.

The work demonstrates an unexpected interaction between the organoids and the rats and mice that served as their hosts. The brains survived for as long as two months, and were seen to forge connections with the circulatory and nervous systems of the animals. The fact that blood and nerve signals were carried between the implanted cells and the hosts is being touted as an unprecedented advance in this field of study.

However, the fact that organoids have been observed to integrate with host bodies raises further ethical dilemmas. In the past, there have been debates about whether lab-grown brains can be considered conscious. Currently, most would agree that the organoids being produced are not, but there’s a distinct possibility that they could gain consciousness in the future. That process could potentially be hastened if the organoids are being integrated into living animals.

Gray Area

Research into organoids is advancing at a breakneck pace. This is good news, but it means that there’s no time to waste when it comes to addressing the ethical component of what’s coming next.

“We are entering totally new ground here,” said Christof Koch, the president of Seattle’s Allen Institute for Brain Science, in an interview with STAT. “The science is advancing so rapidly, the ethics can’t keep up.”

It’s worth noting that, in a broad sense, this practice is nothing new. Hongjun Song, an adjunct professor at The Solomon H. Snyder Department of Neuroscience at Johns Hopkins, told Futurism that researchers have been transplanting human cells into rodent brains for fifty years. The difference today is that the cells being transplanted are organized into structures.

Song argued that right now, there’s no ethical issue, as there isn’t any evidence that these cells are forming the precise circuitry that’s present in adult or even fetal human brains. However, he does recognize the need for these discussions to prepare for future developments.

“It is not an issue now,” Song wrote in an email. “But we need to have discussions as the science and technology evolve, which could be fast approaching.”

The post Growing Human Mini Brains in Rats Isn’t an Ethical Concern…Yet appeared first on Futurism.
