Android P feature spotlight: Google improves neural network API for machine learning and AI developers

Last year, Google introduced a new neural networks API in Android 8.1 Oreo that provided developers with hardware-backed tools for machine learning. Now, with Android P, Google is expanding the API to support nine new operations. Pixel 2 devices will also have support for Qualcomm’s Hexagon HVX driver, giving developers further improvements in performance on those devices. 

At launch, Google’s Neural Networks API supported on-device model creation, compilation, and execution, meaning you could not only build a model on the device as required, but also run it there.

Read More

Android P feature spotlight: Google improves neural network API for machine learning and AI developers was written by the awesome team at Android Police.

Android Police – Android news, reviews, apps, games, phones, tablets

Services firm integrates neural networking into B2B platform

Swedish technology firm IAR Systems, which makes tools for software developers, has integrated ARM’s new machine learning technology into its B2B platform.

Developers using the IAR Embedded Workbench platform can now access ARM’s neural network kernels, it said.

Called the Cortex Microcontroller Software Interface Standard (CMSIS), the system has been developed specifically for companies that use ARM Cortex-M processors.

The firm said the technology lets microcontroller developers simplify software reuse, reduce complex processes, and speed up the time it takes to bring new products to market.

CMSIS-NN is a powerful development tool that is presented as a library of neural network kernels. According to IAR, these can “maximise the performance and minimise the memory footprint of neural networks on ARM Cortex-M processor cores”.

Rise of edge tech

With the library, tech companies have an easier and quicker way to develop IoT edge devices. The use of neural networks is growing, in part because they can both improve efficiency and cut power consumption.

ARM’s machine learning technology is now a core part of IAR Embedded Workbench, which is described as a development toolchain for the Cortex-M series of microcontrollers.

Supporting more than 5,000 ARM devices, the system offers debugging and analysis tools for developers working on lower-power applications.

Machine versus man

Anders Lundgren, product manager of IAR Systems, said machine learning can help companies tap into the potential offered by IoT devices. “Neural networks and machine learning brings exciting new possibilities for embedded developers to move intelligent decisions down to the IoT devices,” he said.

“Developers making use of the powerful features of IAR Embedded Workbench and the ARM CMSIS-NN library will be able to use and maximise the power of embedded neural networks on microcontroller-based IoT edge devices.”

Tim Hartley, product manager of the machine learning group at ARM, said it has developed its latest machine learning technologies to give much-needed support to developers.

“ARM is committed to enabling industry-leading neural network frameworks and supporting leading toolchains, such as IAR Embedded Workbench, for optimising machine-learning applications on the smallest IoT edge devices,” he said.

“Deploying the CMSIS-NN libraries enables developers to achieve up to five times performance and efficiency improvements on Cortex-M processors for machine learning applications.”

Internet of Business says

The edge environment is emerging as a critical space in the development and spread of IoT devices and services, especially where embedded intelligence, modular and/or reusable technology, and lower power consumption are concerned.

High speed, low cost, smart: the targets that all developers need to hit.

Read more: Why you could soon have a neural network on your smartphone

Read more: Intel Movidius Stick puts neural AI on the edge

 

The post Services firm integrates neural networking into B2B platform appeared first on Internet of Business.

Internet of Business

MIT’s new chip could bring neural nets to battery-powered gadgets

MIT researchers have developed a chip designed to speed up the hard work of running neural networks, while also dramatically reducing the power consumed when doing so – by up to 95 percent, in fact. The basic concept involves simplifying the chip design so that the shuttling of data between the processor and memory on the chip is taken out of the equation. The big advantage of this new… Read More
Mobile – TechCrunch

Brains on a battery: Low-power neural net developed, phones could follow

Low-power neural network developed

Researchers at MIT have paved the way to low-power neural networks that can run on devices such as smartphones and household appliances. Andrew Hobbs explains why this could be so important for connected applications and businesses.

Many scientific breakthroughs are built on concepts found in nature – so-called bio-inspiration – such as the use of synthetic muscle in soft robotics.

Neural networks are one example of this. They depart from standard approaches to computing by mimicking the human brain. Usually, a large network of neurons is developed, without task-specific programming. This can learn from labelled training data, and apply those lessons to future data sets, gradually improving in performance.

For example, a neural network may be fed a set of images labelled ‘cats’ and from that be able to identify cats in other images, without being told what the defining traits of a cat might be.
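
To make the idea concrete, here is a minimal, hypothetical sketch of that "learn from labelled examples" loop using scikit-learn. The two toy features and the labels are invented purely for illustration; a real image classifier would work on pixel data rather than two hand-picked numbers.

```python
# Minimal sketch of learning from labelled data with a small neural network.
# The features and labels are invented for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy training set: each row is [ear_pointiness, whisker_length]; label 1 = "cat".
X_train = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y_train = np.array([1, 1, 0, 0])

# A small multi-layer perceptron; no cat-specific rules are programmed in.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# The trained network labels examples it has never seen before.
print(clf.predict(np.array([[0.85, 0.9], [0.15, 0.05]])))  # expected: [1 0]
```

The network is never told what defines a cat; it infers the pattern from the labelled rows and applies it to the unseen ones.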

But there’s a problem. The neurons are linked to one another, much as synapses link neurons in our own brains. Each of these connections typically has a weight associated with it that adjusts as the network learns, affecting the strength of the signal output and, by extension, the final sum.

As a result, constantly transmitting a signal and passing data across this huge network of nodes requires large amounts of energy, making neural nets unsuited to battery-powered devices, such as smartphones.

Consequently, neural network applications such as speech- and face-recognition programs have long relied on external servers to process the data relayed to them, which is itself an energy-intensive process. Even in humanoid robotics, the only route to satisfactory natural language processing has been via services such as IBM’s Watson in the cloud.

A new neural network

All that is set to change, however. Researchers at the Massachusetts Institute of Technology (MIT) have developed a chip that increases the speed of neural network computations by three to seven times, while cutting power consumption by up to 95 percent.

This opens up the potential for smart home and mobile devices to host neural networks natively.

“The general processor model is that there is a memory in some part of the chip, and there is a processor in another part of the chip, and you move the data back and forth between them when you do these computations,” Avishek Biswas, the MIT graduate student in electrical engineering and computer science who led the chip’s development, told MIT News.

Traditionally, neural networks consist of layers of nodes that pass data upwards, one layer to the next. Each node multiplies each value it receives by the weight of the corresponding connection and adds the results together; the outcome of this multiply-and-add is known as a dot product.

“Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption,” said Biswas.

“But the computation these algorithms do can be simplified to one specific operation, the dot product. Our approach was, can we implement this dot-product functionality inside the memory, so that you don’t need to transfer this data back and forth?”
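
For readers unfamiliar with the term, the dot product Biswas describes is just an element-wise multiply followed by a sum. A minimal NumPy sketch, with made-up numbers:

```python
import numpy as np

# Values arriving at one node and the weights of its incoming connections
# (the numbers are invented purely for illustration).
inputs = np.array([0.5, -1.2, 0.7, 0.3])
weights = np.array([0.8, 0.1, -0.4, 0.9])

# The operation the MIT chip moves into memory: multiply element-wise, then sum.
dot = np.dot(inputs, weights)  # same as (inputs * weights).sum()
print(dot)                     # roughly 0.27
```

It is this simple operation, repeated across millions of connections, that dominates the workload; the chip’s trick is performing it where the weights already live rather than hauling them to a separate processor.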

A mind for maths

This process will sometimes occur across millions of nodes. Given that each node weight is stored in memory, this amounts to enormous quantities of data to transfer.

In a human brain, synapses connect whole bundles of neurons, rather than individual nodes. The electrochemical signals that pass across these synapses are modulated to alter the information transmitted.

The MIT chip mimics this process more closely by calculating dot products for 16 nodes at a time in analogue form, as voltages. The combined voltages are then converted to a digital signal and stored for further processing, drastically reducing the number of calls the chip makes to memory.

While many networks have numerous possible weights, this new system operates with just two: 1 and -1. This binary scheme acts as a switch within the memory itself, simply closing or opening a circuit. Although this would seem to reduce the accuracy of the network, the reality is just a two to three percent loss – perfectly acceptable for many workloads.
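
A short NumPy sketch of that binarisation idea, again with invented numbers: once every weight is just +1 or -1, the dot product needs no multiplications at all, only additions and subtractions, which is why a simple open-or-closed circuit in memory is enough.

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(size=16)        # 16 node activations, the width the chip handles at once
full_weights = rng.normal(size=16)  # ordinary full-precision weights
bin_weights = np.where(full_weights >= 0, 1.0, -1.0)  # quantised to just +1 / -1

# With binary weights, each weight only decides whether its input is
# added or subtracted -- no multiplier is needed.
binary_dot = inputs[bin_weights > 0].sum() - inputs[bin_weights < 0].sum()

assert np.isclose(binary_dot, np.dot(inputs, bin_weights))
print(np.dot(inputs, full_weights), binary_dot)  # full-precision vs binary result
```

In practice, binary-weight networks are usually trained with the quantisation in the loop rather than binarised after the fact, which is how the accuracy penalty can stay as small as the two to three percent reported here.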

Internet of Business says

At a time when edge computing is gaining traction, the ability to bring neural network computation out of the cloud and into everyday devices is an exciting prospect.

We’re still uncovering the vast potential of neural networks, but they’re undoubtedly relevant to mobile devices. We’ve recently seen their ability to predict health risks in fitness trackers, such as Fitbit and Apple Watch.

By allowing this kind of work to take place on mobile devices and wearables – as well as other tasks, such as image classification and language processing – there is huge scope to reduce energy usage.

MIT’s findings also open the door to more complex networks in the future, without having to worry so much about spiralling computational and energy costs.

However, the far-reaching power of abstraction inherent in neural networks comes at the cost of transparency. Their methods may be opaque – so-called ‘black box’ solutions – and we expose ourselves to both the prejudices and the restrictions that may come with limited machine learning models, not to mention any training data that replicates human bias.

Of course, the same problems of opacity and bias can be found in people too, and we audit companies without having to understand how any individual’s synapses are firing.

But the lesson here is that, when the outcome has significant implications, neural networks should be used alongside more transparent models, where methods can be held to account. Just as critical human decision-making processes must adhere to rules and regulations.

The post Brains on a battery: Low-power neural net developed, phones could follow appeared first on Internet of Business.

Internet of Business

ARM announces Project Trillium machine learning and neural network IPs

ARM today announced its Project Trillium IP, including new, highly scalable processors capable of delivering enhanced machine learning and neural network functionality. These new technologies are focused on the mobile market and will enable a new class of ML-equipped devices with advanced computing capabilities such as object detection. Project Trillium is a group of software solutions … Continue reading “ARM announces Project Trillium machine learning and neural network IPs”
Fone Arena

This neural network wants to be your Valentine… we think


Research scientist Janelle Shane came up with a novel way to woo your tech-savvy partner this Valentine’s Day. Shane collected all the phrases from the popular Valentine’s heart candy — the one with messages like “Love You” or “Be Mine” — and fed them into a machine learning algorithm. The algorithm strips the words of meaning — it really only understands characters, not context — and begins to spot patterns in the dataset.

“I trained a neural network to generate new candy heart messages – some more successful than others.” https://t.co/tPZyGV3upg pic.twitter.com/9J8HjiMshV — Janelle Shane (@JanelleCShane), February 9, 2018

After…
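
For the curious, here is a minimal character-level model in the same spirit, sketched against the Keras API. It is not Shane’s actual code: the handful of phrases is an invented stand-in for her dataset, and a real run would need the full phrase list and far more training.

```python
import numpy as np
from tensorflow import keras

# Invented stand-in for the candy-heart phrase list.
phrases = ["LOVE YOU", "BE MINE", "CUTIE PIE", "MY HERO", "TRUE LOVE"]
text = "\n".join(phrases)
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

# Turn the text into overlapping (sequence -> next character) training pairs.
seq_len = 5
X, y = [], []
for i in range(len(text) - seq_len):
    X.append([char_to_idx[c] for c in text[i:i + seq_len]])
    y.append(char_to_idx[text[i + seq_len]])
X = keras.utils.to_categorical(X, num_classes=len(chars))
y = keras.utils.to_categorical(y, num_classes=len(chars))

# A small recurrent network that only ever sees characters, never word meanings.
model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(seq_len, len(chars))),
    keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=50, verbose=0)
```

Sampling from the trained model one character at a time, seeded with a few starting letters, is what produces the new (and occasionally baffling) messages.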

This story continues at The Next Web
The Next Web

Twitter is using neural networks for smart auto-cropping of Images

Twitter doesn’t show images in full in the news feed, and the company says this makes it challenging to render a consistent UI experience; more often than not, the cropped photos are framed awkwardly. Today Twitter is solving the problem by using neural networks for smart auto-cropping of images. The company previously used face detection to focus the view on the most prominent faces in a picture, an approach that had its own limitations when presenting images. Now, with neural networks, Twitter will focus on “salient” image regions. In general, people tend to pay most attention to faces, text, and animals, but also to other objects and regions of high contrast. This data can be used to train a neural network to identify what people might want to look at, and the basic idea is to use the network’s predictions to center a crop around the most interesting region. However, doing a pixel-level saliency analysis of every picture uploaded to Twitter would be a lengthy process, so to address the concern, engineers developed a smaller, faster neural network that can identify the gist of an image. Secondly, Twitter is said to have developed a pruning technique to iteratively remove feature maps of the neural network …
Fone Arena

Twitter is using neural networks to improve photo cropping

Twitter doesn’t show full photos when they appear in the stream—you need to tap to expand the whole image. Unfortunately, the cropped version of the photo is often framed awkwardly because it’s just the middle section of the image. Twitter is solving that problem with a neural network that can understand the composition of your images.

The neural network is looking for so-called “salient” image regions. Scientists have studied what people consider salient in images for years using eye-tracking technology.
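
Twitter hasn’t published its cropping code here, but the basic mechanism is easy to sketch: given a saliency map produced by some model, center the crop on the most salient point and clamp it to the image bounds. A hypothetical NumPy version:

```python
import numpy as np

def saliency_crop(image, saliency, crop_h, crop_w):
    """Center a crop on the most salient point, clamped to the image bounds.

    `saliency` is assumed to be a 2D array with the same height and width as
    `image`, produced by a saliency model that is not shown here.
    """
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = min(max(cy - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(cx - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Toy example: a 100x200 image whose most interesting region is near (20, 150).
image = np.zeros((100, 200, 3), dtype=np.uint8)
saliency = np.zeros((100, 200))
saliency[20, 150] = 1.0
print(saliency_crop(image, saliency, crop_h=60, crop_w=60).shape)  # (60, 60, 3)
```

A production system would presumably weigh whole salient regions rather than a single peak pixel, but the principle is the same.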

Read More

Twitter is using neural networks to improve photo cropping was written by the awesome team at Android Police.

Android Police – Android news, reviews, apps, games, phones, tablets

Google Introduces AIY Vision Kit with on-device neural network acceleration for Raspberry Pi

Google introduced the AIY Voice Kit back in May, and today the company has launched the AIY Vision Kit, which offers on-device neural network acceleration for the Raspberry Pi. The Vision Kit includes a new circuit board and computer vision software that can be paired with a Raspberry Pi computer and camera. In addition to the Vision, users will … Continue reading “Google Introduces AIY Vision Kit with on-device neural network acceleration for Raspberry Pi”
Fone Arena