Get out the guacamole, because you’re going to hear a lot about chips on this week’s Internet of Things Podcast! ARM announced a new machine learning architecture called Project Trillium and said it would license an object detection design as well as one that can handle some basic training at the edge. Amazon, too, is building a chip for its edge devices, and machine learning will certainly have a part to play.
Also on this week’s podcast, Stacey and Kevin cover Intel’s smart glasses, Kevin’s opinions on the Apple HomePod and Google’s new IoT hire. They also answer a listener’s question about using different profiles with the Amazon Echo.
The guest this week is Alexandros Marinos, who is the CEO of Resin.io. He discusses the popular hardware platforms for prototyping, the industrial IoT and an up-and-coming platform that is breaking out because of interest in machine learning. He also talks about the similarities and differences between servers and connected devices as it relates to building software to manage them. You’ll learn that servers are like cattle, not like pets.
Almost every major player in the smartphone industry now says that their devices use the power of artificial intelligence (AI), or more specifically, machine learning algorithms. Few devices, however, run their own AI software. That might soon change: thanks to a processor dedicated to machine learning for mobile phones and other smart-home devices, AI smartphones could one day be standard.
British chip design firm ARM, the company behind virtually every chip in today’s smartphones, now wants to put the power of AI into every mobile device. Currently, devices that run AI algorithms depend on servers in the cloud. It’s a rather limited setup: devices need an online connection, and shuttling information back and forth adds latency.
Project Trillium aims to make this process much more efficient. ARM’s built-in AI chips would allow devices to keep running machine learning algorithms even when offline. This reduces data traffic and speeds up processing, while also saving power.
“We analyze compute workloads, work out which bits are taking the time and the power, and look to see if we can improve on our existing processors,” Jem Davies, head of ARM’s machine learning group, told the MIT Technology Review. Running machine learning algorithms locally would also mean fewer chances for sensitive data to leak in transit.
A Staple for Mobile Phones
With the advantages machine learning brings to mobile devices, it’s hard not to see this as the future of mobile computing. ARM, however, isn’t the first to try to make this happen. Apple has already designed and built a “neural engine” as part of the iPhone X’s main chipset to handle the phone’s artificial neural networks for image and speech processing.
Just one day after MIT revealed that some of its researchers had created a super low-power chip to handle encryption, the institute is back with a neural network chip that reduces power consumption by 95 percent. This feature makes them ideal for bat… (via Engadget RSS Feed)
MIT researchers have developed a chip designed to speed up the hard work of running neural networks, while dramatically reducing the power consumed in doing so – by up to 95 percent, in fact. The basic concept involves simplifying the chip design so that shuttling data between different processors on the same chip is taken out of the equation. The big advantage of this new… (via Mobile – TechCrunch)
The Internet of Things hasn't ever been super secure. Hacked smart devices have been blamed for web blackouts, broken internet, spam and phishing attempts and, of course, the coming smart-thing apocalypse. One of the reasons that we haven't seen the… (via Engadget RSS Feed)