Researchers Have Developed a Potential Blood Test for Autism

First of Their Kind

Researchers at the University of Warwick have developed two tests that could potentially detect autism in children. Both tests, one blood and one urine, are based on a previously discovered link between damage to proteins in blood plasma and autism. The team believes the tests to be the first of their kind and hopes they could help improve early detection of autism spectrum disorders (ASD).

The study, published in the journal Molecular Autism, confirmed previous research that had linked certain mutations in amino acid transporters with ASD. Proteins in blood plasma can be damaged by two processes, oxidation and glycation, so the researchers developed tests that can detect that damage.

Armed with this knowledge and using the most reliable of the tests they developed, the team took urine and blood samples from 38 children with ASD, as well as a control group of 31 children who had not been diagnosed with ASD. With the help of an algorithm developed using artificial intelligence (AI), the team determined how the two groups differed chemically.
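The article doesn't detail the algorithm the team used, but the general approach, training a classifier to separate two groups on the basis of chemical measurements, can be sketched in a few lines. The following is a hypothetical illustration only, with made-up marker values, not the Warwick team's actual pipeline:

```python
# Hypothetical illustration only: this is not the Warwick team's
# actual pipeline, and every value below is made up. It shows the
# general idea of training a classifier to separate two groups
# based on chemical measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Group sizes from the study: 38 children with ASD, 31 controls.
# Each row holds four hypothetical protein-damage markers.
n_asd, n_control = 38, 31
X = np.vstack([
    rng.normal(1.2, 0.3, size=(n_asd, 4)),      # invented ASD-group values
    rng.normal(1.0, 0.3, size=(n_control, 4)),  # invented control values
])
y = np.array([1] * n_asd + [0] * n_control)

# Fit a simple linear classifier and estimate how well the
# two groups can be told apart chemically.
scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```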

“With further testing we may reveal specific plasma and urinary profiles, or ‘fingerprints,’ of compounds with damaging modifications,” said Dr. Naila Rabbani, Reader of Experimental Systems Biology at the University of Warwick and the research team’s lead. “This may help us improve the diagnosis of ASD and point the way to new causes of ASD.”

Researchers still do not completely understand why people develop autism. About 30-35% of ASD cases are linked to genetic variants, but there is no exact formula for predicting autism. As with many other conditions, genetics, environment, and other factors all play a role. In recent years, evidence has even been proposed that gut bacteria could indicate whether a person has ASD.

Finding biomarkers for ASD may not be far off, given what the team from Warwick has accomplished: their research demonstrated that measuring protein damage could be a reliable indicator of whether a child has ASD.

“Our discovery could lead to earlier diagnosis and intervention,” said Rabbani. “We hope the tests will also reveal new causative factors.”

ASD cases are characterized by a wide variety of symptoms that can range from mild behavioral issues to debilitating compulsive behavior, anxiety, cognitive impairment, and much more. Because its symptoms are so varied and the causes aren’t yet fully understood, diagnosis and treatment can be an arduous journey.

If tests can be developed that allow families to receive a diagnosis sooner, it will give them the ability to seek intervention earlier, too, which can be essential for helping kids with ASD, and their families, navigate the world and improve their quality of life.


Brains on a battery: Low-power neural net developed, phones could follow


Researchers at MIT have paved the way to low-power neural networks that can run on devices such as smartphones and household appliances. Andrew Hobbs explains why this could be so important for connected applications and businesses.

Many scientific breakthroughs are built on concepts found in nature – so-called bio-inspiration – such as the use of synthetic muscle in soft robotics.

Neural networks are one example of this. They depart from standard approaches to computing by mimicking the human brain. Usually, a large network of neurons is developed, without task-specific programming. This can learn from labelled training data, and apply those lessons to future data sets, gradually improving in performance.

For example, a neural network may be fed a set of images labelled ‘cats’ and from that be able to identify cats in other images, without being told what the defining traits of a cat might be.

But there’s a problem. The neurons are linked to one another, much like synapses in our own brains. These nodes and connections typically have a weight associated with them that adjusts as the network learns, affecting the strength of the signal output and, by extension, the final sum.

As a result, constantly transmitting a signal and passing data across this huge network of nodes requires large amounts of energy, making neural nets unsuited to battery-powered devices, such as smartphones.

As a result, neural network applications such as speech- and face-recognition programs have long relied on external servers to process the data that has been relayed to them, which is itself an energy-intensive process. Even in humanoid robotics, the only route to satisfactory natural language processing has been via services such as IBM’s Watson in the cloud.

A new neural network

All that is set to change, however. Researchers at the Massachusetts Institute of Technology (MIT) have developed a chip that increases the speed of neural network computations by three to seven times, while cutting power consumption by up to 95 percent.

This opens up the potential for smart home and mobile devices to host neural networks natively.

“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” Avishek Biswas, the MIT graduate student in electrical engineering and computer science who led the chip’s development, told MIT News.

Traditionally, neural networks consist of layers of nodes that pass data upwards, one layer to the next. Each node multiplies the data it receives from each incoming connection by that connection’s weight and sums the results. The outcome of this process is known as a dot product.

“Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption,” said Biswas.

“But the computation these algorithms do can be simplified to one specific operation, the dot product. Our approach was, can we implement this dot-product functionality inside the memory, so that you don’t need to transfer this data back and forth?”
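As a minimal sketch of the dot-product operation Biswas describes, illustrative only since the chip performs it in analog, inside the memory array, rather than in software:

```python
# Minimal sketch of the dot-product operation at the heart of a
# neural network layer. Illustrative only: the MIT chip computes
# this in analog, inside the memory array, not in code.
import numpy as np

def layer_forward(inputs, weights):
    # Each output node takes the dot product of the input vector
    # with that node's column of weights; this is the computation
    # the chip moves into memory to avoid shuttling data around.
    return inputs @ weights

x = np.array([0.5, -1.0, 0.25])      # activations from the previous layer
W = np.array([[ 0.2,  0.7],
              [-0.4,  0.1],
              [ 0.9, -0.3]])         # one weight per connection
print(layer_forward(x, W))           # two node outputs, one dot product each
```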

A mind for maths

This process will sometimes occur across millions of nodes. Given that each node weight is stored in memory, this amounts to enormous quantities of data to transfer.

In a human brain, synapses connect whole bundles of neurons, rather than individual nodes. The electrochemical signals that pass across these synapses are modulated to alter the information transmitted.

The MIT chip mimics this process more closely by calculating dot products for 16 nodes at a time, in analog form. The combined voltages are then converted to a digital signal and stored for further processing, drastically reducing the number of data calls on the memory.

While many networks have numerous possible weights, this new system operates with just two: 1 and -1. This binary system acts as a switch within the memory itself, simply closing or opening a circuit. While this would seem to reduce the accuracy of the network, in practice the loss is just two to three percent, which is perfectly acceptable for many workloads.
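A rough software analogue of that two-weight scheme, with made-up values, might look like this (the real chip implements the switch as a closed or open circuit in analog memory):

```python
# Rough software analogue of the two-weight (+1/-1) scheme, with
# made-up values. The real chip implements this as a closed or
# open circuit in analog memory, not in code.
import numpy as np

def binarize(weights):
    # Collapse each real-valued weight to +1 or -1, the only two
    # states the in-memory switch can represent.
    return np.where(weights >= 0, 1.0, -1.0)

x = np.array([0.5, -1.0, 0.25, 0.8])
w = np.array([0.63, -0.17, 0.02, -0.88])

full = x @ w               # full-precision dot product
binary = x @ binarize(w)   # binary weights: the dot product reduces
                           # to adding or subtracting each input
print(full, binary)        # close but not identical; in practice the
                           # accuracy loss is reported as 2-3 percent
```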

Internet of Business says

At a time when edge computing is gaining traction, the ability to bring neural network computation out of the cloud and into everyday devices is an exciting prospect.

We’re still uncovering the vast potential of neural networks, but they’re undoubtedly relevant to mobile devices. We’ve recently seen their ability to predict health risks in fitness trackers, such as Fitbit and Apple Watch.

By allowing this kind of work to take place on mobile devices and wearables – as well as other tasks, such as image classification and language processing – there is huge scope to reduce energy usage.

MIT’s findings also open the door to more complex networks in the future, without having to worry so much about spiralling computational and energy costs.

However, the far-reaching power of abstraction inherent in neural networks comes at the cost of transparency. Their methods may be opaque, so-called ‘black box’ solutions, and we expose ourselves to both the prejudices and the restrictions that may come with limited machine learning models, not to mention any training data that replicates human bias.

Of course, the same problems of opacity and bias can be found in people too, and we audit companies without having to understand how any individual’s synapses are firing.

But the lesson here is that, when the outcome has significant implications, neural networks should be used alongside more transparent models whose methods can be held to account, just as critical human decision-making processes must adhere to rules and regulations.


Human Eggs Developed to Maturity in the Lab for the First Time

For the first time, scientists have successfully taken human eggs from their earliest stages to maturity in a lab setting. This accomplishment is set to give us new insight into how human eggs develop, and it could potentially offer a compelling new option to individuals who are at risk of fertility loss.

For the study, researchers at the University of Edinburgh took ovarian tissue from 10 people in their late 20s and 30s. Using various nutrients, they encouraged eggs to develop to maturity, the point at which they could be fertilized. A total of 48 eggs reached the final stage of the process, and of those, nine reached full maturity.

Currently, individuals at risk of infertility due to radiotherapy or chemotherapy can have ovarian tissue removed ahead of treatment and re-implanted at a later date. For young people who haven’t yet gone through puberty and aren’t yet producing eggs, this is the only option for preserving fertility, Evelyn Telfer, co-author of the research, told The Guardian.

That process raises concerns that re-implanting tissue taken prior to cancer treatment might reintroduce cancer cells into an individual’s body. The new procedure alleviates those concerns because instead of implanting tissue, the doctor would implant an embryo, according to Telfer.

Researchers still have much more work to do before this procedure could be used in practice. At the very least, it will take a number of years to ensure that the mature eggs produced are healthy.

According to the researchers, the eggs they grew developed faster than they would have in the body, a difference that calls for further investigation. Moreover, a small cell known as a polar body grew to an unusually large size during the process, which could indicate developmental abnormalities. The team wants to attempt to fertilize the eggs so it can perform tests on the embryos.

Still, this is a major milestone in fertility research, and it could give new hope to those who may not have had any before.


Google Assistant now understands Hindi, Actions on Google can be developed in Russian (ru-RU)

Google Assistant’s language support is beyond confusing. I’ve been covering it for over six months now, and I still don’t understand why a language can work in one version of Assistant but not another. Take Hindi, for example: it works in Allo, but not in other instances of Assistant, like the main one on your phone. That’s now changing, though.

If you have your phone’s language set to English (India), you’ll be able to activate Assistant by tapping and holding the Home button on any phone running Android 5.0 and above (tablets probably won’t work, as they only support US English for now).


NASA Has Developed Autonomous Space Navigation That Uses Pulsars

X-Ray Navigation

NASA may have just improved our potential for deep space exploration by inventing a new type of autonomous space navigation. Known as Station Explorer for X-Ray Timing and Navigation Technology, or SEXTANT, the technology uses pulsars — rotating neutron stars that emit electromagnetic radiation — to determine the location of objects in space.

The way SEXTANT uses pulsars has been compared to how GPS provides drivers with accurate positioning and navigation using satellites orbiting Earth. The pulsars SEXTANT uses are best observed in the X-ray spectrum, in which their beams of radiation essentially turn them into lighthouses.
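The underlying geometry is similar to GPS trilateration: a shift in the spacecraft’s position along a pulsar’s line of sight changes the measured pulse arrival time, and comparing arrival times across several pulsars pins down the position. Here is a minimal sketch of that recovery step, using invented numbers rather than anything from SEXTANT’s actual software:

```python
# Hedged sketch of the geometry behind pulsar navigation; this is
# not SEXTANT's software, and the pulsar directions, delays, and
# position values are invented for illustration.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Unit vectors toward three pulsars (hypothetical directions).
pulsar_dirs = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

# A position offset shifts each pulse's arrival time by
# (direction . offset) / C. Simulate the measured delays for a
# hypothetical true offset, in metres.
true_offset = np.array([5_000.0, -2_000.0, 8_000.0])
delays = pulsar_dirs @ true_offset / C

# Recover the position offset from the delays by least squares,
# as a navigation filter would (real systems also have to model
# pulsar timing noise and spacecraft motion).
offset, *_ = np.linalg.lstsq(pulsar_dirs, delays * C, rcond=None)
print(offset)  # approximately [5000, -2000, 8000]
```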

To show that SEXTANT is an idea worth building on, a team of NASA engineers demonstrated the technology’s ability to locate NASA’s Neutron-star Interior Composition Explorer, or NICER. NICER — an observatory roughly the size of a washing machine — is currently orbiting Earth while attached to the International Space Station. It has been tasked with studying both neutron stars and pulsars, making it the perfect partner for SEXTANT’s first experiment.

An illustration of NICER attached to the International Space Station. Image Credit: NASA

“This demonstration is a breakthrough for future deep space exploration,” said Jason Mitchell, SEXTANT project manager, in a NASA press release. “As the first to demonstrate X-ray navigation fully autonomously and in real-time in space, we are now leading the way.”

Over two days in November, NASA directed NICER to take readings from four specific pulsars using its 52 X-ray telescopes and silicon-drift detectors. NICER then fed the information it gathered from the pulsars to SEXTANT. Within eight hours, SEXTANT was able to autonomously determine NICER’s location in Earth’s orbit to within a 10-mile radius. SEXTANT’s readings were compared against data from a GPS receiver on board NICER, confirming their accuracy.

“This was much faster than the two weeks we allotted for the experiment,” said SEXTANT System Architect Luke Winternitz in the press release. “We had indications that our system would work, but the weekend experiment finally demonstrated the system’s ability to work autonomously.”

Navigating Deep Space

SEXTANT is far from complete, however, and NASA predicts it will be several years before a better version of the autonomous space navigation system comes along. When it does, the tech will fill a huge need for space exploration. While GPS is fine for Earth and low-Earth orbit, its signal weakens the further an object is from GPS satellites. As such, NASA’s X-ray navigation will be required for spacecraft sent far beyond Earth.

“This successful demonstration firmly establishes the viability of X-ray pulsar navigation as a new autonomous navigation capability,” Mitchell added in the press release. “We have shown that a mature version of this technology could enhance deep-space exploration anywhere within the solar system and beyond.”

With the initial experiment out of the way, NASA intends to improve the system’s flight and ground software for a second demonstration scheduled for later this year. Before SEXTANT can be considered for full-scale operations, however, NASA engineers must increase the sensitivity of its instruments while at the same time decreasing its size, weight, and power consumption.

NASA believes the autonomous space navigation system could eventually be used during human spaceflight missions, or to calculate positions on missions to Jupiter, Saturn, or their respective moons.
