As Routines come to digital assistants, what happens to Yonomi, IFTTT and Stringify?

Google Assistant is closing another gap with Alexa today. In a blog post, Google says that it will begin rolling out Routines for its digital helper.

If you’re not familiar with routines, they’re similar to, but more powerful than, smart home shortcuts. The key difference over a standard shortcut: a routine can handle multiple actions with a single command.

For example, I have one called “Relaxation mode” that dims the lights in my office and also tunes my office Sonos to New Age music. Instead of using two shortcuts or voice commands, I have both actions happen with a single command. You can even add more actions to a routine, such as dimming lights, closing your shades and firing up Netflix on the TV for a “Movie night” routine.
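To make that “multiple actions, single command” idea concrete, here’s a minimal Python sketch of the pattern: one trigger phrase fanning out to a list of actions. The function names and the “relaxation mode” trigger mirror my example above; none of this corresponds to a real Google or Alexa API.

```python
# A hypothetical routine: one spoken phrase, several actions.
# These functions are illustrative stand-ins, not a real assistant API.

def dim_office_lights():
    print("Dimming the office lights")

def play_new_age_on_sonos():
    print("Tuning the office Sonos to New Age music")

# Each routine maps a single trigger phrase to a list of actions.
ROUTINES = {
    "relaxation mode": [dim_office_lights, play_new_age_on_sonos],
}

def handle_command(phrase):
    # One command runs every action in the routine, in order.
    for action in ROUTINES.get(phrase.lower(), []):
        action()

handle_command("Relaxation mode")
```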

Alexa already has Routines and Google says it will be rolling them out for its product line in the coming weeks. I can’t wait. Until then, I’ve had to use a third-party app to tie multiple actions to a command. In my case, “Relaxation mode” is a custom routine I set up in a fantastic app called Yonomi. You can accomplish similar results with IFTTT and Stringify.

Here’s the thing though: What happens to these third party services once the major platforms all have native routines?

Granted, some of the native routine support may be limited, so there might still be room for a Yonomi, IFTTT or Stringify. In fact, Google today said “you’ll be able to use six routines that help with your morning, commutes to and from work, and evening at home.” That leads me to believe Google’s first implementation will be somewhat limited. So I’m not uninstalling Yonomi just yet.

Regardless, it’s likely that native integrations and expanded functionality will take the place of third-party services, so they’ll have to pivot, adapt or maybe even go away. Perhaps one of them gets purchased, since they all have solid user bases and, for the most part, easy-to-use interfaces.

Stringify is already off the block since it was purchased by Comcast in September. Unless Comcast wants to keep the brand alive, I think Stringify will simply melt into Comcast’s Xfinity smart home line.

I recently suggested that IFTTT would be a smart buy for Amazon. We’ll see if that happens. That would actually leave Yonomi as a prime Google target and since I’m already a Yonomi user with multiple Google Home products, I wouldn’t mind seeing that happen.

In any case, I don’t see much of a long-term future for a standalone, third-party smart home integration service, for two reasons: routines are becoming native features in assistants, and the digital assistant platforms have quickly worked to integrate with as many apps, services and devices as they can.


Nest shows the importance of planned business models for IoT devices

There’s good news for Nest Cam IQ indoor owners today: The smart cameras now have Google Assistant capabilities built in. The feature is optional — you can enable or disable it — and there’s no additional charge for the functionality. If you choose to enable it, you now have another microphone and speaker for home control, informational queries, setting reminders, and more. I wouldn’t suggest playing music through the small speaker found in the Nest camera though.

Nest also expanded its Nest Aware subscription offering with a new five-day plan costing a dollar per day. That’s perfect for non-subscribers or folks who don’t want to pay for a monthly plan if they’re only going on vacation for a few days. Person Alerts are also new for the suite of webcams, helping to distinguish a person from some other moving object in your Activity Zones. Again, no charge for this new feature.

This news reminds me of a recurring theme that we discuss on the IoT Podcast: When it comes to IoT, are you buying hardware, services or both? More often than not, the answer is both.

But if you bought a Nest Cam IQ, did you expect new services like Google Assistant or not? If you were promised a future service for your next IoT device but never got it, would you be upset? (You probably would and so would I.) Lastly, if you bought an IoT product and the service offerings were scaled back or changed from free to paid, how would you feel?

All three of these examples highlight the importance of IoT companies clearly defining and communicating their business models, both internally and externally. If they don’t, they run the risk of quickly upsetting loyal customers or failing to account for their true operational costs.

Making sure the Canary is secure costs quite a bit.

Take the recent case of Canary, for example. Last October, the company removed some of its free service features from customers who bought the hardware with an understanding that they’d have such features, even without a paid service plan.

Night Mode, which captures video when motion is detected at night while you’re home and presumably sleeping, went from free to paid. Video recordings were limited to just 10 seconds under the scaled-down free plan. And downloading or sharing video clips was eliminated unless you decided to pay the monthly service fee.

That’s a very different approach from the recent Nest news. Some of it very likely has to do with resources. Since Canary isn’t in the business of running cloud servers for its services, it has to pay for Google Cloud, Amazon Web Services or Microsoft Azure in order to provide these capabilities. Being part of Google, Nest has “in-house” cloud services to use.

But that’s irrelevant to the people buying IoT devices: They (and I, as a Canary owner) don’t want to feel like they’re in for a “bait and switch” when purchasing a connected home hub, sensor, webcam, door lock, or what have you. That’s why if you plan to sell any type of connected device with some type of service, you have to plan ahead early in your design process. And if you commit to a level of free services but later have to change them, existing customers should be grandfathered in, if possible.

I’d argue that Nest has done a better job at this than most. And the Canary example is more of an outlier than the norm, thankfully.

However, I’d bet a month’s worth of my Nest Aware subscription that Nest planned for Google Assistant capabilities when designing the Nest Cam IQ before it launched last May. That way it could make sure the hardware would handle Assistant queries and be loud enough for responses, while at the same time lining up the necessary software to hook into Google’s cloud for digital assistance.

Besides the hardware and software though, Nest surely did the math on costs for Google’s cloud. Maybe those are free or maybe they’re an internal transfer for the accountants. I suspect it’s the latter, along with analysis of how much of the cloud costs could be recouped through growing hardware sales based on new or additional features.

The point is: If you’re in the IoT device business, service planning may be the most important aspect of your product’s life-cycle. Make sure to do your homework well before the product hits the shelves and begin with the customer in mind.


InfluxData and the TICK stack for IoT data streaming

Tesla uses InfluxData in its energy products. Image courtesy of Tesla.

With every advancement in technology we get a new database to get excited about. With the cloud, we started caring about scale, and NoSQL databases rose to the fore. With social networks, graph databases became the hot new thing. And now, with the internet of things, time series databases are getting their day in the sun.

That’s why InfluxData just raised $35 million in a round led by Sapphire Ventures. The goal is to expand the company’s database sales beyond its current customers, which include Tesla, IBM, and Nordstrom. At the end of January, another time series database called Timescale raised $12.4 million in funding. So the space is hot.

Time series databases aren’t new. Traditionally, they simply record the state of some sensor along with the time of the measurement. But now that there are connected sensors that can take in data hundreds of times a day or more, these databases are seeing more action. Plus, in many situations it’s not enough to collect the data and then ship it somewhere as a log. Now people want to take action on that data. And they want to take that action as soon as possible.

This means that time series databases aren’t just handling a greater velocity and volume of data; they also have to analyze it as it streams by. Think of it as the more active version of logging data as performed by companies such as Splunk. There are many time series databases out there, including giants such as GE’s Predix, as well as smaller projects like Riak or Graphite. Many projects started as ways to monitor IT systems and websites, not thermostat readings or automotive data.

In InfluxData’s case, CEO Evan Kaplan touts the speed of the database plus the available suite of tools it works with, which allows developers to monitor IoT assets and query data even as more data is coming in. It also stores data in a compressed format and quickly ditches the dregs it doesn’t need.

Together with tools called Telegraf, Chronograf, and Kapacitor, Kaplan is selling a concept called the TICK stack. It is designed to rapidly ingest and handle data while also giving users the tools to query it. As a lover of many IT stacks—from the historical LAMP (Linux, Apache, MySQL, PHP) stack for web development to the more recent SMACK (Spark, Mesos, Akka, Cassandra, and Kafka) stack for big data—I like the idea of one for the IoT.
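To give a flavor of that write-then-query loop, here’s a minimal sketch against a local InfluxDB 1.x server using the open-source influxdb Python client. The “iot” database and “temperature” measurement are illustrative names I made up; in a full TICK deployment, Telegraf would handle the ingestion and Kapacitor the alerting.

```python
# A minimal sketch: write timestamped sensor readings to InfluxDB and
# query them back, downsampled, while new points keep streaming in.
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="iot")
client.create_database("iot")  # no-op if the database already exists

# Each point is a measurement name, tags for filtering, a timestamp,
# and one or more field values.
client.write_points([
    {
        "measurement": "temperature",
        "tags": {"sensor": "office", "device": "thermostat-1"},
        "time": "2018-02-16T12:00:00Z",
        "fields": {"value": 21.5},
    }
])

# Downsample the last hour of readings into 10-minute averages.
result = client.query(
    'SELECT MEAN("value") FROM "temperature" '
    "WHERE time > now() - 1h GROUP BY time(10m)"
)
print(list(result.get_points()))
```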

However, note that in this case Influx is promoting the tools it is developing as opposed to developers promoting a collection of independent technologies that they have found to work well together. That doesn’t mean it will fail; it’s just a different genesis.

As for revenue, InfluxData has a slightly different model than the traditional open-source efforts. It offers one server for free, and as the database expands, customers will pay for an enterprise license so they can build a larger cluster capable of handling more. Given how much time series data machines throw off, it’s a model that should net it plenty of revenue over time.


Let’s talk about machine learning at the edge

ARM believes its architecture for object detection could find its way into everything from cameras to dive masks. Slide courtesy of ARM.

You can’t hop on an earnings call or pick up a connected product these days without hearing something about AI or machine learning. But as much hype as there is, we are also on the verge of a change in computing that’s as profound as the shift to mobile was a little over a decade ago. In the last few years, the results of that shift have started to emerge.

In 2015, I started writing about how graphics cores—like the ones Nvidia and AMD make—were changing the way companies were training neural networks for machine learning. A huge component of the improvements in computer vision, natural language processing, and real-time translation efforts has been the impressive parallel processing capability of graphics processors.

Even before that, however, I was asking the folks at Qualcomm, Intel, and ARM how they planned to handle the move toward machine learning, both in the cloud and at the edge. For Intel, this conversation felt especially relevant, since it had completely missed the transition to mobile computing and had also failed to develop a new GPU that could handle massively parallel workloads.

Some of these conversations were held in 2013 and 2014. That’s how long the chip vendors have been thinking about the computing needs for machine learning. Yet it took ARM until 2016 to purchase a company with expertise in computer vision, Apical, and only this week did it deliver on a brand-new architecture for machine learning at low power.

Intel bought its way into this space with the acquisition of Movidius and Nervana Systems in 2016. I still don’t know what Qualcomm is doing, but executives there have told me that its experience in mobile means it has an advantage in the internet of things. Separately, in a conference call dedicated to talking about the new Trillium architecture, an ARM executive said that part of the reason for the wait was a need to see which workloads people wanted to run on these machine learning chips.

The jobs that have emerged in this space appear to focus on computer vision, object recognition and detection, natural language processing, and hierarchical activation. Hierarchical activation is where a low-power chip might recognize that a condition is met and then wake a more powerful chip to provide the necessary reaction to that condition.
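Here’s a rough Python sketch of that control flow. Both “models” are stand-ins (the cheap one just returns a random score) purely to show the gating pattern; no real chip or library is implied.

```python
# Hierarchical activation: a cheap, always-on detector gates a
# power-hungry model so the big chip can sleep most of the time.
import random

WAKE_THRESHOLD = 0.9  # illustrative; a real system tunes this carefully

def low_power_score(frame):
    """Tiny detector that could run continuously on a low-power core."""
    return random.random()  # placeholder for real wake-word/vision scoring

def heavy_inference(frame):
    """Full model on the main processor; woken only when needed."""
    return "full recognition result"

for frame in range(1000):  # stand-in for a stream of audio/video frames
    # The cheap check runs on every frame; the big model rarely does.
    if low_power_score(frame) >= WAKE_THRESHOLD:
        print("Condition met, waking main processor:", heavy_inference(frame))
```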

But while the traditional chip vendors were waiting for the market to tell them what it wanted, the big consumer hardware vendors, including Google, Apple, Samsung—and even Amazon—were building their own chip design teams with an eye to machine learning. Google has focused primarily on the cloud with its Tensor Processing Units, although it did develop a special chip for image processing for its Pixel mobile phones. Amazon is building a chip for its consumer hardware using tech from its acquisition of Annapurna Labs in 2015 and its purchase of Blink’s low-power video processing chips back in December.

Some of this technology is designed for smartphones, such as Google’s visual processing core. Even Apple’s chips are finding their way into new devices (the HomePod carries an Apple A8 chip, which first appeared in Apple’s iPhone 6). But others, like the Movidius silicon, use a design that’s made for connected devices like drones or cameras.

The next step in machine learning for the edge will be to build silicon that’s specific to the internet of things. These chips, like ARM’s, will focus on machine learning at drastically reduced power consumption. Right now, the training of neural networks happens mostly in the cloud and requires massively parallel processing as well as super-fast I/O. Think of I/O as how quickly the chip can move data around between its memory and the processing cores.

But all of that is an expensive power proposition at the edge, which is why most edge machine learning jobs are just the execution of an already established model, or what is called inference. Even in inference, power consumption can be reduced with careful designs. Qualcomm makes an image sensor that requires less than 2 milliwatts of power, and can run roughly three to five computer vision models for object detection.

But inference might also include some training, thanks to new silicon and even better machine learning models. Movidius and ARM are both aiming to let some of their chips actually train at the edge. This could help devices in the home setting learn new wake words for voice control or, in an industrial setting, be used to build models for anomalous event detection.

All of which could have a tremendous impact on privacy and the speed of improvement in connected devices. If a machine can learn without sending data to the cloud, then that data could stay resident on the device itself, under user control. For Apple, this could be a game-changing improvement to its phones and its devices, such as the HomePod. For Amazon, it could lead to a host of new features that are hard-coded in the silicon itself.

For Amazon in particular, this could even raise a question about its future business opportunities. If Amazon produces a good machine learning chip for its Alexa-powered devices, would it share it with other hardware makers seeking to embrace its voice ecosystem, in effect turning Amazon into a chip provider? Apple and Google likely won’t share. But Samsung’s chip business sells to others as well as supplying its own gear, so I’d expect its edge machine learning chips to find their way into the world of non-Samsung devices.

For the last decade, custom silicon has been a competitive differentiator for tech giants. What if, thanks to machine learning and the internet of things, it becomes a foothold for a developing ecosystem of smart devices?


IoT news of the week for Feb. 16, 2018

Google is buying Xively for $50M: Google, which has apparently seen that it needs to step up its IoT cloud game, said it will purchase the Xively IoT platform from LogMeIn for $50 million. Xively is a fine IoT platform that always seemed like a strange addendum to LogMeIn. Before LogMeIn bought it, it was known as Pachube, and was the creation of Usman Haque, a forward-thinking individual when it came to sensor data monitoring. Xively was a platform-as-a-service offering that managed much of the difficult cloud connections for devices. Combined with hardware kits, the idea was that a developer could get from idea to a working device quickly without having to understand how to connect things and manage them in the cloud. (Google)

Particle brings mesh networking to IoT devices: Most of my IoT projects these days are DIY, or do-it-yourself, efforts. So it’s exciting to see Particle (formerly known as Spark) bring new wireless technology to its small compute boards. Ranging in price from $9 to $29, the new third-gen Particle boards merge traditional connections—think LTE, Wi-Fi, and Bluetooth—with mesh technology so each of the sensor boards can transmit to the others, helping with overall connectivity and data transfer. In other words, not all of your IoT devices need their own internet connection, which can reduce device costs. With Particle’s mesh technology and Thread network support, a non-internet-connected sensor could still transmit its data over the web by using other Particle products on the mesh network that act as gateways. Check this video for the full story. (Particle)

Intel-powered drones win Olympic Gold: If you missed the 2018 Winter Olympic opening ceremonies, you missed quite a show. And yet the best performers weren’t even people, but the 1,218 drones that dazzled with an amazingly choreographed light show. Wired explains how they did it using Intel’s Shooting Star drones. (Wired)

Wearable tech is also on tap for the Winter Olympics: Drones aren’t the only IoT-related things at this year’s Winter Games. Smart clothes and other wearable technology are part of the events, ranging from self-heating jackets with connected apps to speed skating suits that send real-time training data to coaches and skaters. Those sound a little more useful to me than the Halo headsets being used by the U.S. Ski Team: Halo sends energy pulses to a skier’s brain to “prime” their performance. I’ll stick with the warm jacket, thank you. (Gadgets and Wearables)

Another co-founder flies from the Nest: Google’s re-absorption of Nest from Alphabet won’t just impact development teams and supply chain management. The last remaining co-founder of Nest, Matt Rogers, is leaving the team as well. This week, Rogers told CNET that he’ll help the hardware team plan its 2019 roadmap and assist with the re-integration of Nest’s team into Google. After that, though, he’s walking out the door and essentially out of smart home hardware creation. Instead, Rogers plans to focus on Incite.org, a venture firm and labs group he co-founded with Swati Mylavarapu. It’s hard to believe that just six months ago Stacey interviewed Rogers to hear more about Nest’s security products. (CNET)

Faster, more power-efficient encryption at the edge: With recent stories about how much electricity Bitcoin mining gobbles up, it’s nice to see some focus on power efficiency. That’s what MIT has done with a new chip said to increase the speed of public-key encryption on devices by a factor of 500. While the speed is welcome—device encryption processes typically aren’t quick—even better is that the hardware approach reduces the encryption power requirements to just 1/400th of the energy of a software encryption approach. This is important for IoT devices at the edge of a network, which can run on small batteries and therefore need to conserve every milliwatt of power they can. Watch for more ASICs, or application-specific integrated circuits, as our IoT needs continue to expand beyond traditional software solutions. (MIT)

What are the impacts of driverless cars? Let me count the ways: This list of 73 implications of autonomous vehicles is a super read, because it’s one thing to talk about a driverless-car future from the perspective of the technology, but it’s another when you consider the numerous impacts caused by the technology. Think of reductions in traffic policing, for example, a possible decrease in demand for car ownership, or major disruption to the automobile insurance industry. I’m not typically a fan of list-like articles, but this one from Geoff Nesnow is worth an exception to the rule. (Medium)

LimeBike raises $70M for real estate companies to offer dockless bikes: When I visited Scottsdale, Arizona over the Christmas holiday, I couldn’t walk more than 100 feet without seeing what looked like a discarded neon green bicycle. Upon closer inspection, I found out these were LimeBikes: cycles used for inexpensive rides with the idea of leaving the bike at your destination. LimeBikes use a connected lock, integrated GPS, and mobile app for the ride. Now, the company has raised another $70 million (for a total of $132 million) to make it easier to find and store bikes at large, managed real estate properties through dedicated parking spaces. It’s a smart move because it provides centralized accessibility in places where there might be a large number of customers looking for quick and cheap mobility. (Forbes)

Misty wants a robot in every house: You’re likely familiar with Sphero, the company that makes a small, $100 robotic ball. You may not, however, know about Misty Robotics, which spun out of Sphero for a different market. Misty is targeted for a developer edition release this month at a cost of $1,500. The idea is that a more feature-packed and easily programmable robot could lead to less of a toy and more of a functional assistant based on what developers create with Misty. Using dual treads, Misty can roam around your home either autonomously or programmatically. And she has far more smarts than a Sphero, thanks to a pair of Qualcomm Snapdragon chips (found in most smartphones), a light sensor for mapping, a digital camera, a microphone, speakers, and USB ports. And a 4.3-inch touchscreen shows Misty’s “emotions” based on information or activities. Using either Blockly or JavaScript along with Misty APIs, she looks relatively easy to program. Perhaps Misty is on tap for my next project! (Fast Company)

Another day, another botnet. Where’s the fix?: I doubt we’ll ever see the end of botnet attacks on devices, but we do need to see the end of infected devices that may never get patched. The Satori botnet infected 100,000 devices in just half a day back in December, and plenty of device makers did what they’re supposed to do and provided patches to address the issue. Dasan isn’t one of those device makers, though. More than 40,000 Dasan-built routers are still exploitable by Satori, and the company reportedly still hasn’t responded to a December advisory explaining that its routers infected by Satori allow for unauthorized remote code execution. The public needs to keep putting pressure on device makers that don’t take quick action when security problems surface. Keep voting with your dollars in the meantime. (Ars Technica)

HomePod’s smarts are in the speaker engineering, not in Siri: This is a bit of a personal plug, since I reviewed Apple’s HomePod earlier this week. Most of the early reviews were based on the HomePod experience through a combination of Apple briefings and personal use. I found most of those to be less critical (and filled with far more positive superlatives) than reviews from those of us who simply bought our own HomePod. Maybe it’s just me, but I wasn’t as blown away by the sound as early reviewers were. And I find it difficult to give Siri a pass when she’s smarter on the iPad and iPhone than she is inside the HomePod. (StaceyOnIoT)
