Let’s talk about machine learning at the edge

ARM believes its architecture for object detection could find its way into everything from cameras to dive masks. Slide courtesy of ARM.

You can’t hop on an earnings call or pick up a connected product these days without hearing something about AI or machine learning. For all the hype, though, we really are on the verge of a change in computing as profound as the shift to mobile was a little over a decade ago. In the last few years, the results of that shift have started to emerge.

In 2015, I started writing about how graphics cores—like the ones Nvidia and AMD make—were changing the way companies were training neural networks for machine learning. A huge component of the improvements in computer vision, natural language processing, and real-time translation has been the impressive parallel processing that graphics processors offer.

Even before that, however, I was asking the folks at Qualcomm, Intel, and ARM how they planned to handle the move toward machine learning, both in the cloud and at the edge. For Intel, this conversation felt especially relevant, since it had completely missed the transition to mobile computing and had also failed to develop a new GPU that could handle massively parallel workloads.

Some of these conversations were held in 2013 and 2014. That’s how long the chip vendors have been thinking about the computing needs for machine learning. Yet it took ARM until 2016 to purchase a company with expertise in computer vision, Apical, and only this week did it deliver on a brand-new architecture for machine learning at low power.

Intel bought its way into this space with the acquisition of Movidius and Nervana Systems in 2016. I still don’t know what Qualcomm is doing, but executives there have told me that its experience in mobile means it has an advantage in the internet of things. Separately, in a conference call dedicated to talking about the new Trillium architecture, an ARM executive said that part of the reason for the wait was a need to see which workloads people wanted to run on these machine learning chips.

The jobs that have emerged in this space appear to focus on computer vision, object recognition and detection, natural language processing, and hierarchical activation. In hierarchical activation, a low-power chip recognizes that a condition has been met and then wakes a more powerful chip to respond to that condition.
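The wake-on-condition pattern behind hierarchical activation can be sketched in a few lines of Python. This is a toy illustration of the idea, not any vendor's actual design; the threshold values and labels are invented:

```python
def low_power_detect(sample):
    """Cheap always-on check, e.g. 'is there motion in the frame?'
    Stands in for a model running on a milliwatt-class sensor chip."""
    return sample["motion_score"] > 0.5

def high_power_classify(sample):
    """Expensive model that runs only when the cheap check fires,
    e.g. full object detection on the main application processor."""
    return "person" if sample["motion_score"] > 0.8 else "background"

def process(stream):
    results = []
    for sample in stream:
        if low_power_detect(sample):      # condition met: wake the big chip
            results.append(high_power_classify(sample))
        else:
            results.append(None)          # big chip stays asleep, saving power
    return results

stream = [{"motion_score": s} for s in (0.1, 0.6, 0.9, 0.3)]
print(process(stream))  # → [None, 'background', 'person', None]
```

The power savings come from the asymmetry: the expensive model runs on only the fraction of samples that pass the cheap gate.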

But while the traditional chip vendors were waiting for the market to tell them what it wanted, the big consumer hardware vendors, including Google, Apple, Samsung—and even Amazon—were building their own chip design teams with an eye to machine learning. Google has focused primarily on the cloud with its Tensor Processing Units, although it did develop a special chip for image processing for its Pixel mobile phones. Amazon is building a chip for its consumer hardware using tech from its acquisition of Annapurna Labs in 2015 and its purchase of Blink’s low-power video processing chips back in December.

Some of this technology is designed for smartphones, such as Google’s visual processing core. Even Apple’s chips are finding their way into new devices (the HomePod carries an Apple A8 chip, which first appeared in Apple’s iPhone 6). But others, like the Movidius silicon, use a design that’s made for connected devices like drones or cameras.

The next step in machine learning for the edge will be to build silicon that’s specific to the internet of things. These devices, like ARM’s, will focus on machine learning at drastically reduced power consumption. Right now, the training of neural networks happens mostly in the cloud and requires massively parallel processing as well as super-fast I/O. Think of I/O as how quickly the chip can move data around between its memory and the processing cores.

But all of that is an expensive power proposition at the edge, which is why most edge machine learning jobs are just the execution of an already established model, or what is called inference. Even in inference, power consumption can be reduced with careful designs. Qualcomm makes an image sensor that requires less than 2 milliwatts of power, and can run roughly three to five computer vision models for object detection.
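To see why inference is so much cheaper than training, here is a minimal sketch of what edge inference amounts to: a single forward pass through a toy one-layer model whose weights were fixed elsewhere. All the numbers are made up for illustration:

```python
import math

# Hypothetical weights for a tiny one-layer classifier, "trained in the
# cloud"; on the device they are frozen constants.
WEIGHTS = [[0.9, -0.4], [-0.3, 0.8]]
BIAS = [0.1, -0.1]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def infer(features):
    """Forward pass only: multiply, add bias, squash. No gradients and
    no weight updates — which is what keeps the power budget small."""
    scores = []
    for w_row, b in zip(WEIGHTS, BIAS):
        z = sum(w * f for w, f in zip(w_row, features)) + b
        scores.append(sigmoid(z))
    return scores.index(max(scores))  # index of the winning class

print(infer([1.0, 0.2]))  # → 0
```

Training, by contrast, would require running this pass millions of times while also computing and applying gradient updates, which is where the cloud's parallel processing and fast I/O come in.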

But inference might also include some training, thanks to silicon and even better machine learning models. Movidius and ARM are both aiming to let some of their chips actually train at the edge. This could help devices in the home setting learn new wake words for voice control or, in an industrial setting, be used to build models for anomalous event detection.

All of which could have a tremendous impact on privacy and the speed of improvement in connected devices. If a machine can learn without sending data to the cloud, then that data could stay resident on the device itself, under user control. For Apple, this could be a game-changing improvement to its phones and its devices, such as the HomePod. For Amazon, it could lead to a host of new features that are hard-coded in the silicon itself.

For Amazon in particular, this could even raise a question about its future business opportunities. If Amazon produces a good machine learning chip for its Alexa-powered devices, would it share it with other hardware makers seeking to embrace its voice ecosystem, in effect turning Amazon into a chip provider? Apple and Google likely won’t share. But Samsung’s chip business serves both its own gear and outside customers, so I’d expect its edge machine learning chips to find their way into non-Samsung devices.

For the last decade, custom silicon has been a competitive differentiator for tech giants. What if, thanks to machine learning and the internet of things, it becomes a foothold for a developing ecosystem of smart devices?

Stacey on IoT | Internet of Things news and analysis

AI Smartphones Will Soon Be Standard, Thanks to Machine Learning Chip

AI Built In

Almost every major player in the smartphone industry now says that their devices use the power of artificial intelligence (AI), or more specifically, machine learning algorithms. Few devices, however, run their own AI software. That might soon change: thanks to a processor dedicated to machine learning for mobile phones and other smart-home devices, AI smartphones could one day be standard.

British chip design firm ARM, the company behind virtually every chip in today’s smartphones, now wants to put the power of AI into every mobile device. Currently, devices that run AI algorithms depend on servers in the cloud. It’s a rather limited setup, with online connectivity affecting how information is sent back and forth.

Project Trillium would make this process much more efficient. Its built-in AI chip would allow devices to continue running machine learning algorithms even when offline. This reduces data traffic and speeds up processing, while also saving power.

“We analyze compute workloads, work out which bits are taking the time and the power, and look to see if we can improve on our existing processors,” Jem Davies, ARM’s machine learning group head, told the MIT Technology Review. Running machine learning algorithms locally would also mean fewer chances of data slipping through.

A Staple for Mobile Phones

With the advantages machine learning brings to mobile devices, it’s hard not to see this as the future of mobile computing. ARM, however, isn’t the first to try to make this happen. Apple has already designed and built a “neural engine” as part of the iPhone X’s main chipset to handle the phone’s artificial neural networks for image and speech processing.

Google’s own chipset, for its Pixel 2 smartphone, does something similar. Huawei’s Mate 10 packs a neural processing unit developed by the Chinese smartphone maker. Amazon might follow soon with its own AI chips for Alexa.

A diagram showing how Project Trillium will develop chips for AI smartphones, beginning with ground-up design, progressing to uplift from processors, and enabled by open-source software, ending in a processor that targets the mobile market.
Image credit: ARM

The MIT Tech Review notes, however, that ARM’s track record for energy-efficient mobile processors could translate into more widespread adoption of its AI chip. ARM doesn’t actually make the chips it designs, so the company has started sharing its plans for the AI chip with its hardware partners—like smartphone chipmaker Qualcomm. ARM expects its machine learning processor to show up in devices by early 2019.

The post AI Smartphones Will Soon Be Standard, Thanks to Machine Learning Chip appeared first on Futurism.


NEWSBYTE: ARM launches scalable chips for IoT machine learning

Semiconductor and software giant ARM has announced a new range of scalable processors designed to deliver enhanced machine-learning capabilities to IoT devices.

The SoftBank-owned company, which counts Apple, Samsung, drone maker DJI, and Fitbit among those who rely on its architecture, is pitching Project Trillium as the industry’s most scalable, versatile ML compute platform.

The Project Trillium suite aims, according to Rene Haas, president of ARM’s IP Products Group, to strike a better balance between energy efficiency and computing power.

“The rapid acceleration of artificial intelligence into edge devices is placing increased requirements for innovation to address compute, while maintaining a power efficient footprint,” he said.

“To meet this demand, ARM is announcing its new ML platform, Project Trillium. New devices will require the high-performance ML and AI capabilities these new processors deliver. Combined with the high degree of flexibility and scalability that our platform provides, our partners can push boundaries of what will be possible across a broad range of devices.”

• SoftBank also owns Aldebaran Robotics, makers of the NAO, Pepper, and Romeo humanoids, and robotics giant Boston Dynamics, formerly part of Alphabet.

Internet of Business says

The edge environment is an increasingly important part of the IoT, and with AI and machine learning being embedded into more and more functions that need to execute in real time, this type of innovation can only grow.

The post NEWSBYTE: ARM launches scalable chips for IoT machine learning appeared first on Internet of Business.


ARM announces Project Trillium machine learning and neural network IPs

ARM today announced Project Trillium, a suite of IP that includes new highly scalable processors capable of delivering enhanced machine learning and neural network functionality. These technologies initially target the mobile market, enabling a new class of ML-equipped devices with advanced computing capabilities and object detection. Project Trillium is a group of software solutions …
Fone Arena

Your next phone may have an ARM machine learning processor

ARM doesn’t build any chips itself, but its designs are at the core of virtually every CPU in modern smartphones, cameras and IoT devices. So far, the company’s partners have shipped more than 125 billion ARM-based chips. After moving into GPUs in recent years, the company today announced that it will now offer its partners machine learning and dedicated object detection processors.
Mobile – TechCrunch