Not long after Apple finally entered the smart speaker market with the HomePod, it appears there’s further competition on the way. Job ads suggest that Spotify is planning to enter production with its own speaker.
It was clear as long ago as last April that Spotify was at least exploring the idea, but new job ads suggest that the company may soon be gearing up for manufacturing …
So, if you can’t wait for the actual event on February 25th, here’s everything we think we know about the Galaxy S9 so far:
What it looks like:
As is practically tradition at this point, numerous official renderings of the S9 have leaked out (mostly from VentureBeat’s Evan Blass). As expected, it looks like Samsung is by and large keeping the same design as the Galaxy S8. The same edge-to-edge “Infinity Display” is here, as are the oblong bezels on…
You can’t hop on an earnings call or pick up a connected product these days without hearing something about AI or machine learning. But as much hype as there is, we are also on the verge of a change in computing that’s as profound as the shift to mobile was a little over a decade ago. In the last few years, the results of that shift have started to emerge.
In 2015, I started writing about how graphics cores—like the ones Nvidia and AMD make—were changing the way companies were training neural networks for machine learning. A huge component of the improvements in computer vision, natural language processing, and real-time translation has been the impressive parallel processing capability of graphics processors.
Even before that, however, I was asking the folks at Qualcomm, Intel, and ARM how they planned to handle the move toward machine learning, both in the cloud and at the edge. For Intel, this conversation felt especially relevant, since it had completely missed the transition to mobile computing and had also failed to develop a new GPU that could handle massively parallel workloads.
Some of these conversations were held in 2013 and 2014. That’s how long the chip vendors have been thinking about the computing needs for machine learning. Yet it took ARM until 2016 to purchase a company with expertise in computer vision, Apical, and only this week did it deliver on a brand-new architecture for machine learning at low power.
Intel bought its way into this space with the acquisition of Movidius and Nervana Systems in 2016. I still don’t know what Qualcomm is doing, but executives there have told me that its experience in mobile means it has an advantage in the internet of things. Separately, in a conference call dedicated to talking about the new Trillium architecture, an ARM executive said that part of the reason for the wait was a need to see which workloads people wanted to run on these machine learning chips.
The jobs that have emerged in this space appear to focus on computer vision, object recognition and detection, natural language processing, and hierarchical activation. Hierarchical activation is where a low-power chip might recognize that a condition is met and then wake a more powerful chip to respond to that condition.
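The pattern described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual implementation: a cheap always-on check stands in for the milliwatt-scale chip, and an expensive function stands in for the powerful chip that is only woken when the first stage fires.

```python
class HierarchicalDetector:
    """Toy model of hierarchical activation: a cheap first-stage
    check gates a more expensive second-stage model."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.expensive_calls = 0  # track how often the big chip wakes

    def cheap_check(self, reading):
        # Stand-in for an always-on, low-power sensor check: here it
        # just compares a scalar reading against a threshold.
        return reading > self.threshold

    def expensive_model(self, reading):
        # Stand-in for the powerful chip's full model. In practice this
        # is where object detection or speech recognition would run.
        self.expensive_calls += 1
        return "event" if reading > self.threshold * 2 else "noise"

    def process(self, reading):
        if not self.cheap_check(reading):
            return None  # condition not met; the big chip stays asleep
        return self.expensive_model(reading)


detector = HierarchicalDetector(threshold=10)
results = [detector.process(r) for r in [1, 5, 12, 25, 3]]
# Only two readings cleared the cheap check, so the expensive
# model ran only twice out of five samples.
print(results, detector.expensive_calls)
```

The power savings come entirely from how rarely the second stage runs, which is why the first-stage model has to be small enough to run continuously.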
But while the traditional chip vendors were waiting for the market to tell them what it wanted, the big consumer hardware vendors, including Google, Apple, Samsung—and even Amazon—were building their own chip design teams with an eye to machine learning. Google has focused primarily on the cloud with its Tensor Processing Units, although it did develop a special chip for image processing for its Pixel mobile phones. Amazon is building a chip for its consumer hardware using tech from its acquisition of Annapurna Labs in 2015 and its purchase of Blink’s low-power video processing chips back in December.
Some of this technology is designed for smartphones, such as Google’s visual processing core. Even Apple’s chips are finding their way into new devices (the HomePod carries an Apple A8 chip, which first appeared in Apple’s iPhone 6). But others, like the Movidius silicon, use a design that’s made for connected devices like drones or cameras.
The next step in machine learning for the edge will be to build silicon that’s specific to the internet of things. These devices, like ARM’s, will focus on machine learning with drastically reduced power consumption. Right now, the training of neural networks happens mostly in the cloud and requires massively parallel processing as well as super-fast I/O. Think of I/O as how quickly the chip can move data around between its memory and the processing cores.
But all of that is an expensive power proposition at the edge, which is why most edge machine learning jobs are just the execution of an already established model, or what is called inference. Even in inference, power consumption can be reduced with careful designs. Qualcomm makes an image sensor that requires less than 2 milliwatts of power, and can run roughly three to five computer vision models for object detection.
But inference might also include some training, thanks to silicon and even better machine learning models. Movidius and ARM are both aiming to let some of their chips actually train at the edge. This could help devices in the home setting learn new wake words for voice control or, in an industrial setting, be used to build models for anomalous event detection.
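To make the anomaly-detection idea concrete, here is a minimal sketch of what "training at the edge" can mean at its simplest. The streaming-statistics approach below (Welford's online algorithm) is my illustrative stand-in, not what Movidius or ARM silicon actually runs: the device updates a running mean and variance from each sensor reading, so its notion of "normal" is learned on-device without sending data to the cloud.

```python
class StreamingAnomalyDetector:
    """Learns a running mean/variance from a sensor stream and flags
    readings that deviate sharply from what it has seen so far."""

    def __init__(self, z_threshold=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.z_threshold = z_threshold

    def update(self, x):
        # Welford's online update: the "model" learns from each sample
        # in constant memory, which suits a low-power device.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x):
        if self.n < 2:
            return False  # not enough history to judge yet
        std = (self.m2 / (self.n - 1)) ** 0.5
        if std == 0:
            return x != self.mean
        return abs(x - self.mean) / std > self.z_threshold


det = StreamingAnomalyDetector()
for reading in [10.0, 10.1, 9.9, 10.05, 9.95, 10.0, 10.1, 9.9]:
    det.update(reading)
print(det.is_anomalous(10.0))  # typical reading
print(det.is_anomalous(50.0))  # far outside learned range
```

A real industrial deployment would use a richer model, but the core point is the same: the learned state lives on the device, so raw readings never have to leave it.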
All of which could have a tremendous impact on privacy and the speed of improvement in connected devices. If a machine can learn without sending data to the cloud, then that data could stay resident on the device itself, under user control. For Apple, this could be a game-changing improvement to its phones and its devices, such as the HomePod. For Amazon, it could lead to a host of new features that are hard-coded in the silicon itself.
For Amazon in particular, this could even raise a question about its future business opportunities. If Amazon produces a good machine learning chip for its Alexa-powered devices, would it share it with other hardware makers seeking to embrace its voice ecosystem, in effect turning Amazon into a chip provider? Apple and Google likely won’t share. And Samsung’s chip business serves both its own gear and other companies’, so I’d expect its edge machine learning chips to find their way into the world of non-Samsung devices.
For the last decade, custom silicon has been a competitive differentiator for tech giants. What if, thanks to machine learning and the internet of things, it becomes a foothold for a developing ecosystem of smart devices?
It can be difficult to find time to finish a video game, especially if you only have a few hours a week to play. In our new biweekly column, Short Play, we suggest video games that can be started and finished in a weekend, and since it’s a long weekend in the US, this one is a little longer than normal.
Night in the Woods is the story of Mae, a 20-year-old college sophomore, returning home for the first time in almost two years after deciding to drop out of school. It was originally released last February on PC and PlayStation 4, after a successful Kickstarter in 2013. I picked it up recently when the enhanced version, called Night in the Woods: Weird Autumn, came to the Switch earlier this month. At a time when there is constant news of…
Snapchat may not have the widest audience compared to social juggernauts like Facebook, but there’s a core of users who have stuck by the service even as competitors like Instagram integrated similar features. Of course, we’ve been more than a bit critical of Snapchat and its terrible Android performance over the years—late last year the company promised to (finally) make an effort on the platform. But over the last week, Snapchat has been rolling out a controversial new redesign.