Like the rest of the tech media, Kevin and I kick off the show with a discussion about data collection and privacy in light of the allegations against Cambridge Analytica. We also talk about smart home lock-in, Alexa’s new “brief” mode, shopping on Google Home and my IoT spring clean. IBM’s new crypto chip and Watson Assistant made the show, as did several industrial IoT news bits, such as Foghorn’s industrial IoT integration with Google’s cloud and a new hardware platform for IIoT from Resin.io. We also answer a listener question about IoT for new parents. Plus, I spoke with David Kaiserman, president of Lennar Ventures, the investment arm of Lennar Homebuilders. Kaiserman walked me through a Lennar home outfitted with a bunch of smarts, and shared his thoughts on what matters to buyers and the gear inside. He also shed light on Amazon’s Alexa-focused geek squad and explained why Lennar backed out of its plans for an Apple HomeKit home and banked on Alexa instead. Enjoy.
I’ve been testing the SmartThings Link for the past week and it’s a great, inexpensive add-on if you have a Shield TV. (By the way, the $179.99 Shield TV on its own is a fantastic Android TV and Chromecast device; I use it all the time in my home office, where it’s connected to a 4K set.) I was lucky to be among the first buyers of Samsung’s SmartThings USB stick, and paid just $9.99 for it. These days, you’ll find it for the full price of $39.99, which is still a good deal.
There isn’t much to the Link. Its main purpose is to add both a Zigbee and a Z-Wave radio to the Nvidia Shield TV, which already has Wi-Fi and Bluetooth radios as well as a Gigabit ethernet port. From what I can tell, all of the hub processing takes place on the Shield TV, not on the USB stick. For comparison, the SmartThings Hub, which has the same array of radios but no Android TV/Chromecast capability, costs $89.99.
Installation is super simple as well. You just open the SmartThings app on the Shield TV, plug the Link into the Shield, sign in to or create a Samsung account, and then link the “hub” to your SmartThings phone app. Two notes, though: First, there are now two SmartThings mobile apps. During the setup process, I tried to use the newest SmartThings app, but it didn’t work. Instead, I had to use the original mobile app, which is now called SmartThings Classic. That may actually be a good thing, since the newer app hasn’t been very well received. Second, the Link is pretty wide, so if, like me, you’re using one of the two USB ports on the Shield TV for additional storage, you’ll need the small USB extender cable included with the Link.
If you’re familiar with SmartThings or already have a SmartThings hub, there’s nothing new here to see. For the rest of us, you get the functionality of Samsung’s SmartThings Hub product without actually owning the hub.
Just about everything you can do on Samsung’s native hub can be done with the Shield TV and SmartThings Link. I say “just about everything” because Samsung has a history of updating the firmware on its own hub first, leaving the Link running an older version. So new features that come to Samsung’s SmartThings Hub may not appear on the Link for some time. Regardless, you can add the same supported devices to the Link and set up the same automations and routines. I haven’t found any major technical differences between the tested setup and an actual SmartThings Hub.
Since I’ve built my smart home around a Wink Hub 2, I only tested a few devices with the Link: A bulb, a lock and a motion sensor. All of them work just as they do when connected to my Wink Hub. And although I’ve long preferred Wink, Samsung does have one key advantage when it comes to device compatibility, which I love.
If you purchase a smart device that isn’t compatible with SmartThings, you may still be able to use it. That’s because Samsung allows you to create or install device handlers so that the Link (or SmartThings Hub) can control it.
In fact, I have two non-supported Z-Wave devices that I’m testing now with the Link because I was provided unpublished device handlers for them. More on those in a separate review to come, but the point is this: Device handlers are handy to have. Wink doesn’t support them, so I’m considering a full-scale changeover to SmartThings.
I did have one concern about the Link before setup, but it turned out to be unfounded. I thought that my TV would have to be on for the Link to work, since the Shield drives all content to the set. Indeed, when using Nvidia’s Shield TV, the set-top box lights up green, so you know it’s on. Even when that green light is off and the Shield TV is in sleep mode, however, the Link hub works. I should have realized this because I often use a Google Home voice command to turn on the Shield TV. It works every time because the set-top box is just sleeping, not completely off.
Speaking of Google, you can link Google Assistant or Home with the SmartThings Link to use voice commands and control connected home devices. Even if you don’t own a Google Home or have Assistant installed on your phone, this works through the microphone inside Nvidia Shield TV’s remote. Sadly, Nvidia hasn’t yet delivered on the low-cost Google Assistant microphone, called Nvidia Spot, it announced in January 2017. If it ever does, I would fully expect those to work with the SmartThings Link as well. Those who prefer using Alexa can do so without any hassle.
Overall, I’m impressed by this little USB stick. Granted, I already spent money on the Nvidia Shield TV; if you haven’t or if you’re not in the market for a new set-top box, this isn’t for you.
Even if you own a Shield TV, you may want to pass on the Link based on where your connected TV is. This isn’t like a hub that you can place directly in the center of your home for maximum range. I’m just lucky that my home office is in the right spot and can reach all of my connected devices, including the limited-range ones that use Bluetooth or Zigbee. Thanks to a simple setup, device flexibility and strong voice integration, I just may retire my Wink Hub 2 in favor of the SmartThings Link.
Just when I thought the market for home security solutions was saturated, Alula, a company formed by merging ipDatatel and Resolution Products, launched this week with a security system designed for dealers, one that purportedly lets them offer a DIY-type solution to their customers as well.
Alula Holdings CEO Brian McLaughlin says the company was created to help dealers compete in the rapidly changing environment for home security. Resolution Products made the physical hardware for the security systems while ipDatatel provided the connectivity. The two companies combined to create Alula, likely one of many consolidations to come in the residential and business security market.
Dealers can buy the gear and connectivity from Alula and use it to create any type of business model they want. They can resell the gear for a premium and let the user handle everything, or they can sell it at some level of loss and sign the user to a contract. This latter business model is common across large security companies such as Alarm.com and ADT.
What’s changed is the landscape of DIY options. Thanks to devices like the Nest Cam or easily linked door and window sensors, a whole new market for security gear has opened up. The people in this market might spend $500 to $1,000 on a few security products linked up through a smartphone app with no monitoring fees. Instead of getting a call from the police if and when a problem occurs, they’ll get a notification on their smartphone.
There are obvious downfalls to this scenario (to start, not everyone checks their smartphone for every notification), but for many people security doesn’t mean an alarm that calls the cops when it goes off. They just want to know what’s happening in their homes. The big security firms have responded to this newer market with their own low-cost systems and lower monitoring fees. For example, ADT offers a package of sensors and a camera with SmartThings.
ISPs are also looking at this space as an opportunity; Comcast, for example, offers security and home automation through its Xfinity Home product. Meanwhile, Nest offers a home security system that users can monitor themselves or pay someone to monitor if they choose. Nest’s system costs $500 for a hub, two sensors that offer motion and open/close detection, and two person tags that a user can swipe to arm or disarm the system. For $250 more, you can get a Nest Cam or a Nest video doorbell. Another vendor that offers DIY security, SimpliSafe, charges $359 for three open/close sensors, a motion detector, a camera, a hub, and a keypad.
We can’t make a direct comparison between Alula’s gear and DIY options because the dealer ultimately will decide how to price the system, but the company sent me two theoretical options for a configuration I specified. Alula says the consumer cost of an outdoor camera, two door sensors, two window sensors, an indoor camera, and a motion sensor, all tied to the Alula Connect+ with central station monitoring, would likely be priced at $29.99 a month on a three-year contract with no up-front product cost. Without a contract, the system would cost about $500 up front, with monitoring for $9.99 a month. In this configuration there is no keypad, and the user would use his or her phone to arm and disarm the system.
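A quick back-of-the-envelope calculation using Alula's theoretical numbers shows how the two options compare over the same three-year window:

```python
# Three-year cost comparison of Alula's two theoretical pricing options.
MONTHS = 36

# Option 1: no up-front product cost, $29.99/month on a three-year contract
contract_total = 29.99 * MONTHS

# Option 2: ~$500 up front, $9.99/month monitoring, no contract
no_contract_total = 500 + 9.99 * MONTHS

print(f"Contract:    ${contract_total:,.2f}")     # $1,079.64
print(f"No contract: ${no_contract_total:,.2f}")  # $859.64
```

So over three years, the no-contract buyer comes out roughly $220 ahead; the contract shifts the hardware cost into the monthly fee.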
Instead of focusing on the gear, I think that in order to remain competitive, the next-generation security company should think about services and tweaks to its product lines that incorporate new devices and machine learning. It’s asinine to think that a security system isn’t going to tie directly into the home automation system to handle things like lights and to glean information about which person is home and their preferred security settings.
While bringing in products and connectivity is a start, it’s not enough to keep a dealer in the fight. Alula should add more context-aware sensors and invest in integrations with devices that can offer that context. Much like Amazon or even Google, a true security system will just be part of an overarching AI system that controls the home.
Routines, scenes and automations, oh my! These are some of the advanced smart home features that our digital assistants have or recently gained, but what are they and which are making our homes smarter? Let’s start with what I think are the lowest tier: Routines and Scenes, which are basically the same but have different names.
There are some minor differences among the implementations from Amazon, Apple and Google, but over time, I’m sure all three companies will improve the ability to use one command to control multiple actions. At least I hope so. Google Assistant Routines, for example, are limited to six different presets (shown below), and for now you can only choose from a select group of smart home actions.
This is good momentum for the smart home. But the reality is that routines and scenes aren’t truly “smart”. All they do is extend a voice command from a one-to-one action to a one-to-many action. Essentially, this is still a user interface (UI) tweak.
Automations, though? Those are a step above routines in my book because they make your home take actions based on device-triggered or time-of-day events. And they don’t require any voice interaction, which is extremely useful in certain situations. They do, however, require a smart home hub or some other centralized smart home “brain”, unless you want to use a third-party application that can tie some of your devices together.
I’ve had some readers suggest that we don’t need hubs. Instead they argue that we need defined IoT standards or that we can just use the cloud as a hub.
Those are valid thoughts, but the reality is that widespread IoT standards aren’t coming anytime soon, if ever. Using the cloud is great until your home’s internet connection goes down: In that case a local copy of your smart home devices with automation rules running on a small computing device would work, but that essentially is a hub.
Hubs solve a key challenge: individual devices in the smart home typically don’t know about each other. Instead, a hub is what bridges data from smart switches, bulbs, locks, webcams and sensors that can all use different radio technologies. Put another way: The hub is the traffic cop in the intersection of data created by all of our smart devices. It knows the user-programmed rules of who should do what and when in the smart home.
Let me offer a simple example: If I want to walk through my front door at 10pm and have the inside lights automatically turn on, how can the smart door lock tell the lights to illuminate?
Currently, it can’t. The lock and lights have no relationship that they know of. Nor do they have any processing power to programmatically make some cause-effect event happen. That’s where the hub comes in. It can see from the door lock that I’m home. And using a rules based system, along with the time of day, it can tell the smart lights to turn on through automation.
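To make that concrete, here is a minimal sketch, in Python with hypothetical device and event names, of the kind of rule a hub evaluates. Real hubs like SmartThings or Wink express this through their own apps and automation engines; this just illustrates the traffic-cop idea:

```python
from datetime import time

# Hypothetical rule: when the front door unlocks at 10pm or later,
# turn on the living room lights.
RULES = [
    {
        "trigger": ("front_door", "unlocked"),
        "condition": lambda now: now >= time(22, 0),  # 10pm or later
        "action": ("living_room_lights", "on"),
    },
]

def handle_event(device, state, now, send_command):
    """The hub's traffic-cop loop: match an incoming device event
    against the user-programmed rules and issue commands."""
    for rule in RULES:
        if rule["trigger"] == (device, state) and rule["condition"](now):
            send_command(*rule["action"])

# Example: the lock reports "unlocked" at 10:15pm.
sent = []
handle_event("front_door", "unlocked", time(22, 15), lambda d, s: sent.append((d, s)))
print(sent)  # [('living_room_lights', 'on')]
```

The lock and the bulb never talk to each other; the hub holds the rule, watches the clock, and bridges the two radios.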
This is why I think it’s so important that Google create a hub, especially since Amazon and Apple both have one for the home. Granted, the Amazon Echo Plus doesn’t yet support such automations, but it’s only a matter of time before that happens. If you have an Apple TV, HomePod, or plugged-in iPad, you’ve got a hub that can automate your HomeKit devices. Apple’s implementation is in the Home app, and it’s surprisingly easy to use.
In my vision of the smart home capability ladder, routines sit below automations, which make the home experience “smarter” with less user input. Currently, automations are as smart as the home gets.
Above automations though, I look to autonomy. By autonomy, I mean a central home hub that combines context, user patterns and personal information to anticipate actions and even suggest them to us. But, that’s a ways off just yet, so for now, I’ll be content to use routines and automation in my home.
This is what we all fear when talking about security and IoT: A few weeks back I wrote about a vulnerability in Schneider Electric’s controllers that was exploited and the company’s reaction. But the New York Times lays out some very scary facts around a failed attack on a petrochemical plant in Saudi Arabia. Basically, the story confirms our worst fears about how vulnerable critical infrastructure can be to malicious acts by nation states. It appears that the attack required physical access to the plant’s network, rather than being delivered over the internet, but the fact is that as we move to digital control of devices, even those that aren’t connected to public networks are vulnerable. (NYT)
DCI Alexa at your service: Police in England are letting citizens hear about police reports and crimes committed in their area via their Amazon Echo. The next step is letting people report crimes via Alexa. There are a lot of questions about this idea, ranging from what kind of help one could expect to how soon one would get it after calling Alexa. I also wonder whether it further divides people along class or even privacy-loving lines. (The Intercept)
Alexa gets a business degree? In an interview, Werner Vogels, the CTO of Amazon, talks about plans to introduce Alexa to business environments. The article glides across complex topics, such as making sure companies can write their own skills for Alexa and whether their software can integrate with the smart speaker. I worry that until the challenge of matching specific voices to requests is solved, Alexa will mostly be there to make conference calls run a little more smoothly rather than offering up much business insight. There are so many different layers of access and permissions in business environments that we can’t really replicate them using just our voice as the key. But walking into an empty conference room and asking Alexa to book it for a future meeting would be handy. (Axios)
Nest has a new temperature sensor: Whelp, it’s about time! Nest finally has a temperature sensor that will work with its thermostats, allowing users to take temperatures from around their house, not just where the thermostat is. The sensors cost $39 for one or $99 for a three-pack. Nest also released its video doorbell options, giving it a credible (but expensive) home security system offering. (CNET)
L.L. Bean backs out of IoT plans: In February, L.L. Bean teamed up with sensor maker Loomia to put connected tags in the sportswear company’s boots and jackets. It wanted to use the sensors and the blockchain to track what happened to its gear after it was sold. The goal was to see how often it was worn and in what conditions. It has now quit the project after concerned consumers worried that L.L. Bean would be tracking everything from their location to their activities in the sensor-laden gear. I’ve said this before, and now seems like a good time to say it again: Without reliable and known privacy safeguards, the internet of things will never win over the mainstream. We need regulations, and companies should be helping to drive them forward. (WSJ)
AI researchers need to develop a security mindset: How often have you sat at a traffic light marveling that the only reason there aren’t more deaths behind the wheel of a car is that most people understand and follow arbitrary rules? Most people don’t run red lights, for example. Most people try to stay in their lane. (This isn’t necessarily true in other parts of the world, but no matter where you are, there are still rules that everyone has agreed to follow.) Unfortunately when everything’s connected we have to assume that everything is vulnerable to the few who don’t follow the rules. And that includes those designing AI for use in the real world. This article talks about how researchers are being spoofed by those who want to test or disprove the validity of AI models, and argues that when one pixel can be changed to fool a computer, designers of automated systems need to start thinking like their adversaries, much like security researchers do. (Wired)
Is data portability the answer to data monopolies? The advantage that big platforms have when it comes to bettering their products using machine learning is astonishing. Good AI depends on good data sets to train the AI, and giant tech companies have this data in droves. But in D.C., regulators are starting to ask questions about what it means that competitors can’t seem to accrue the same data advantage. One solution is to make it easier for consumers to port their extant social graphs to other platforms. In practice, such an effort would be difficult, given how this type of data is stored and how interlinked it is. Other solutions involve escrows or regulatory bodies that might oversee the tech giants. I think that as the GDPR regulations take effect in Europe, we’ll see a lot of innovation on this front, so it may make sense to wait a few more months and see how things unfold. (Forbes)
Meet the new heads of the Industrial Internet Consortium: This week, the Industrial Internet Consortium changed its leadership, bringing in four men from Dell, Huawei, Real-Time Innovations, and Bosch. The IIC launched with much fanfare in 2014 and has since worked to create frameworks for various industrial internet test cases. So instead of watching over a specific protocol, the organization helps its members pull together entire projects and certifies those as interoperable. Outside of the members, I haven’t encountered a lot of companies using IIC-certified frameworks, but let me know if you do. (BusinessWire)
Check out a new guild for DIYers: My friend, Dr. Lucy Rogers, who has appeared on the podcast, has set up a new effort for those of us who love Arduinos and playing with solder or glue. The Guild of Makers launches Friday in the UK, and I’d love to see something like this form in the U.S. as a resource for folks who are trying to figure out how to flash a Pi or knit a sweater. (Guild of Makers)
The 2018 Design in Tech report is out: John Maeda thinks on the topic of design and how we should bring design principles to everyday experiences mediated through technology. This may mean a focus on web site design, the flow of a user through a shopping site, or even how a user interacts with a connected object. A big theme in this year’s report is how we experience AI-driven choices in our tech lives. The report also represents an experiment of sorts for Maeda, who learned various coding practices in order to make the report more interactive and responsive. Sadly, it doesn’t degrade gracefully and the resulting slides can be difficult to read. This feels like a design mistake, especially when connectivity isn’t always a given. (Design in Tech)
This week, Cloudflare introduced its Workers platform to the world as a new form of edge computing. The news is worth taking a closer look at given all the intense focus on edge computing today. For example, the telcos are all pushing forward with their version of edge computing, contained on servers at the edge of their cellular networks.
And not a week goes by without some startup claiming it has a new edge computing platform or tool. Part of the ubiquity of the phrase “edge computing” comes from the fact that every player in the IoT thinks of the edge in a different way.
Sensor companies think of the edge as tiny, battery-powered devices that gather data, while industrial manufacturers consider it a computer on a machine that gathers data from multiple sensors. Intel and Dell think of the edge as a gateway, or as servers on a factory floor. The telcos, along with content delivery and internet security provider Cloudflare, view the edge as the limits of their own networks.
For Matthew Prince, CEO of Cloudflare, the edge touted by industrialists and sensor folks will eventually disappear. “Any on-premise devices are going away,” he says. Instead, he sees a future where there is device-side computing, back-end computing in the cloud, and what he calls the “third place” of computing, which happens in between those two.
The benefits of such an architecture are that a company can take advantage of computing power that’s geographically closer to the device, and build devices at the edge that are cheaper because they have no need for big CPUs. As an added bonus, because those devices connect through Cloudflare’s network, they aren’t directly on the public internet and as such, have some security protection. The downside to this architecture is that when the internet fails, so do all the programs you have running in the cloud. Basically, one trades the cost of putting expensive compute chips in an edge device for the cost of building dual forms of connectivity into it.
I’m not sure all on-premise devices will go away, especially not in the next five to 10 years, but I do think the idea of having a third place for computing makes sense. Some of the examples Prince offered by way of customer stories really resonate. For example, a company building an edge device designed to take in constant data, such as a thermometer, could send the data to a Cloudflare Worker program that aggregates it and then sends a sample to the cloud for storage or for processing later on. But if the temperature data spikes, the Worker program can take action and send an alert to the end user.
And ideally, that alert would take less time to reach the end user and would be more resilient than a function hosted on the cloud that’s dependent on a single data center location. Another advantage of this approach is that it makes managing the equipment a bit easier. In the temperature sensing example, for instance, the end user just has to buy the sensors tied to the Cloudflare Worker program and put them in his or her location.
As those sensors age, they can be updated remotely and even replaced without having to futz with a gateway box. One of the more challenging aspects of deploying IoT offerings is that provisioning connected devices can be a nightmare of typing in passwords or snapping pictures of QR codes. In this case, devices can arrive pre-provisioned.
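The pattern Prince describes is simple to state: aggregate at the edge, forward a summary upstream, and alert immediately on a spike. Workers themselves are written in JavaScript, but the core logic is language-agnostic; here is an illustrative sketch in Python with a hypothetical threshold:

```python
ALERT_THRESHOLD = 40.0  # hypothetical spike threshold, in degrees C

def process_readings(readings, threshold=ALERT_THRESHOLD):
    """Aggregate a batch of sensor readings the way an edge Worker might:
    send only a summary to the cloud, but flag a spike for immediate alerting."""
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }
    alert = summary["max"] > threshold  # spike: notify the end user now
    return summary, alert

summary, alert = process_readings([21.5, 22.0, 21.8])
print(alert)  # False: just forward the summary upstream

summary, alert = process_readings([21.5, 45.2, 22.0])
print(alert)  # True: fire the alert from the edge, before the cloud sees it
```

The point is that only the small summary dict travels to the cloud, while the time-sensitive decision happens in the "third place" between device and data center.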
What I’d like to see is a robust discussion of the merits of each approach and a clear understanding of their related trade-offs. There’s obviously an opportunity for this version of edge computing with some connected devices, especially those that need to be cheap and easily deployed.
Location has been a mainstay of the mobile internet for more than a decade. Using GPS in phones has enabled all kinds of innovative applications, from Waze to Uber. But GPS isn’t a match for the internet of things. It hogs battery power, doesn’t work well indoors, and GPS modules are expensive to put into products.
Which is why a crop of startups and big companies are trying to find other options for locating devices that won’t cost a lot or drain batteries. And it would be awesome if they worked well indoors — or better yet, in three dimensions, so you could see if an object was on the fourth floor or the fifth. Hoopo is one of the startups that thinks it has mastered this challenge.
Hoopo uses existing low-power wide-area networks to track goods and services in a set area. It uses triangulation to find tiny tags placed on pallets, vehicles, or whatever other equipment a client wants monitored. Currently, Hoopo’s technology can work on LoRa networks, although it isn’t confined to that radio standard.
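Hoopo's actual method isn't public, but the underlying geometry of fixing a tag's position from distance estimates to fixed gateways is classic trilateration. A minimal 2D sketch, assuming exact distances and three gateways at known positions (real systems must cope with noisy estimates):

```python
def trilaterate(p1, p2, p3, d1, d2, d3):
    """Solve for the 2D position of a tag given three gateway positions
    and the tag's distance to each (exact-geometry, noise-free case)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting the circle equations pairwise yields two linear equations.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1  # zero if the gateways are collinear
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Tag at (3, 4); gateways at three corners of a 10x10 area.
print(trilaterate((0, 0), (10, 0), (0, 10), 5.0, 65 ** 0.5, 45 ** 0.5))
```

Adding a fourth gateway and a third coordinate is what gets you the floor-level (3D) fix mentioned above, which is exactly where these systems are still playing catch-up.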
The Israeli company has raised $1.5 million to build out its tags and the necessary gateways. Its CEO, Ittay Hayut, says he sees a market for tracking things as diverse as cattle on farms and medical equipment in hospitals. Hayut’s contention that the IoT needs low-power location tracking technologies is a common one.
Other companies are trying to get granular location without GPS as well. For example, PoLTE uses triangulation of cellular signals to determine the placement of a device. It recently raised an undisclosed Series A round, although the company has existed for at least the last nine years. PoLTE doesn’t use tags, but instead uses a device’s SIM card. It sells its software and an appliance to run its software to carriers that then implement it into their networks.
The operators then sell the location services as part of their IoT solutions. PoLTE has signed deals to get its software into a variety of modems and can deliver location data accurate to between 2 and 6 meters. It’s not able to offer location in three dimensions yet, but is working on it.
Locating things without sucking up a lot of power will go beyond letting companies track people and assets. It could also lead to new ownership models for expensive gear and expand our understanding of the world. For example, loaning out a ladder to a neighbor is easier when you can see exactly where that ladder is. Or in the case of the environment, low-power tracking lets us monitor small creatures that a GPS module might overwhelm.
So while initial use cases will be around asset tracking and fleet management, low-power geolocation will enable a new wave of startups and innovation in the years to come.
Although I wasn’t very impressed by the smart home smarts of Apple’s HomePod, the device does share something in common with the latest Amazon Echo product: It works as a smart home hub. Surprisingly, Google doesn’t have a hub product even though its peers in the digital assistant space do. It’s time for that to change.
To be clear, Google Home and Google Assistant are very effective at controlling smart devices in the home. And Google is in the process of adding Routines, which appear to be similar to smart home “scenes”. While those are certainly useful, they still rely on voice activation, and in some cases voice is not the best user interface. A perfect example is when I’m up late in my home office and everyone else is asleep. Calling out to a Google Home, or having it respond to my command, can wake everyone up. There’s also no automation built into routines, and I still feel that’s the missing piece of the puzzle when it comes to making our homes “smart”.
Such automations are generally done at the hub level and as I noted: Google Home is not a hub by definition. It doesn’t process data from various smart home devices and take actions based on that data. Instead, it patiently sits there, passively waiting for a voice command. Google Home devices also don’t have the radios needed to be a true hub: There’s no Zigbee or Z-Wave radio inside of a Google Home device. At least not yet.
I’m thinking there should be. Or rather, Google should take a page out of Amazon’s playbook and create its version of the Amazon Echo Plus, which does have a Zigbee radio in addition to Wi-Fi and Bluetooth support. I’d take it a step further though because the Amazon Echo Plus still doesn’t natively support the automation features you find in traditional smart home hubs such as Samsung’s SmartThings or the Wink Hub.
Sure, you can use third-party services for device automation (Yonomi, Stringify and IFTTT are perfect examples), but a simple native solution would be better. And one of those services is already part of a smart home entity. Comcast purchased Stringify last year to help expand the functionality of its Xfinity Home services. I also noted last year that it might make sense for Amazon to purchase IFTTT and, more recently, thought that Yonomi would be a good candidate for Google to purchase.
Whether Google does or doesn’t make a purchase like this for future automations, it still makes sense to build its own smart home hub. Or maybe it buys Wink for its hub technology, branding, reach and the fact that some Wink products already run Android: The Wink Relay is a perfect example of that.
Without its own hub product, any Google home automation efforts are constrained to working with third-party device makers and the APIs they may (or may not) provide. By centralizing the smarts into a Google-made hub, the company can work with device makers on creating more capable APIs in a faster time frame. Heck, Google could use the existing Robots feature as an automation framework if it decided to buy Wink.
And then there’s the data. You can’t have a conversation about any Google effort without talking about the data it could capture and then use to better understand user behavior.
Today, a Google Home can tell the company when you used voice commands to turn a light on or when you locked a smart door lock on your way out of the house. Automations would provide Google even more information, because they offer an entirely different view into how a smart home behaves.
That sounds more than a little creepy, of course. Instead of using a GPS-enabled phone to see when you leave home, for example, Google would know that someone left the house based on data from a Google Home hub. Tie that data into security cams and add a little image recognition for good measure, and Google would know who it is that left.
Today, the company does this from a phone. When I worked in New York City, for example, Google recognized this commute because I configured my Home and Work locations in Google Maps. With a hub, it wouldn’t need to use Maps or my phone data to see that I may have left for work. And it could then have my home do certain things upon leaving, such as arm a home security system.
While you or I may not be thrilled that a company has this information, I do want a smarter home that knows when to enable my security system without any interaction on my part, although some systems, like the Canary that I bought, do that today. Even the Nest security system relies on tags or the user manually arming or disarming the system from the app. That seems like a lost opportunity.
Maybe Google is OK being “behind” Apple and Amazon when it comes to smart home hubs. However, I’m leery of that. In fact, I thought Google’s OnHub would be a smart home hub, but that didn’t happen. I still have an OnHub, and inside it is a Zigbee radio that Google never used. Instead, Google updated the firmware to make the OnHub a functional duplicate of its Google WiFi mesh networking products. Oddly, the Google WiFi products have a Zigbee radio as well, something I didn’t know initially because Google doesn’t list it on the product tech specs. (Thanks, Lateef, for pointing this out in a teardown link!) And yet, newer models of Google WiFi don’t have Zigbee support, per FCC testing documents. I don’t think it’s likely that Google will enable Zigbee in the older units at some point, but it’s possible.
I wouldn’t be surprised at all if Google adds a hub product to the Google Home line this May at Google I/O. Google Assistant and voice control are only part of the smart home solution, and Google is surely smart enough to recognize that.
(Update: This post was updated at 3:14pm on 3/12/2018 regarding Zigbee radios in some Google WiFi units.)
Application programming interfaces, or APIs, have become the currency of the digital era. They are the link between devices, web sites, and services, and as such can have an outsized effect on your user experience. As a case in point, consider my frustration with Google Home and its inability to reliably play the music I want.
A friend at Google who looked into this for me said that my lackluster experience was likely due to a poor integration of the Spotify API with the Google Home. So after hearing APIs be blamed for frustrations in my personal life while also hearing people in various industrial or commercial settings talk about their challenges working with APIs, I decided to figure out what the heck is happening in this weird world of application programming interfaces.
First up, APIs tend to get all the blame, even if the problem is somewhere else in a device or in the back-end cloud. Blaming an API is the ultimate in shooting the messenger, except when it isn’t. Because sometimes APIs are the problem. Back when APIs became popular in the web world, roughly 20 years ago, developers used them to share information between web sites. That expanded to include computing elements, such as those offered by Amazon Web Services. And now, they are expanding again — to connect devices to web sites and to computing services.
But while the web world has had years to work out the kinks when it comes to developing APIs, the hardware folks are relatively new to this. Kin Lane, a consultant who goes by the title API Evangelist, says the folks developing APIs for devices tend to break some of the API best practices because they aren’t thinking about how others — especially non-hardware experts — might use them.
One of the most common API usability crimes hardware folks commit is describing the access and functions they offer in jargon or inexplicable acronyms. If you’re making an API to connect to a light bulb, for example, exposing a cryptic raw color value is far less handy than a human-readable label like blue or yellow-white light. Consider as well how the API will be used, and for how long. An API has the potential to become infrastructure, which means others’ services or businesses may rely on it. If that’s the case, you should communicate with those developers when you change something, ideally before you change it. And you shouldn’t change the API every few days, because the developer handling it is likely juggling many others.
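To make the naming point concrete, here is a minimal sketch contrasting two request payloads for a hypothetical smart-bulb API. Neither endpoint nor field name comes from a real product; the point is only what a non-hardware developer sees when reading each one.

```python
import json

# Jargon-heavy design: raw register-style values that only the firmware
# team can decode without documentation.
cryptic_request = {"op": 0x2A, "val": 0x1F3C, "cct": 153}

# Readable design: self-describing field names with explicit units.
readable_request = {
    "action": "set_color",
    "color_name": "cool-white",        # or an explicit hex color value
    "color_temperature_kelvin": 6500,  # 6500 K is ~153 mired, but far clearer
    "brightness_percent": 80,
}

print(json.dumps(readable_request, indent=2))
```

Both payloads could drive the same bulb; only the second one lets a web developer guess the behavior without reading the hardware spec.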
Another API design sin is putting too much complexity into it. Prakash Khot, CTO at AthenaHealth, says that keeping things to a minimum and designing for modularity helps keep an API stable and usable. He also recommends that you consider error messages and feedback as part of the overall API design.
Too often when a request fails, the API designer hasn’t created a way to communicate what went wrong. This is frustrating for the end user and the company trying to work with the API. Also, in the case of an error message, Khot recommends thinking about the user’s privacy. For example, if a credit card number isn’t shared properly, don’t ship the number back and forth as part of the error.
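Khot’s advice about error messages and privacy can be sketched in a few lines. This is a hypothetical helper, not code from any real payment API; it shows an error payload that explains the failure without echoing the full card number back over the wire.

```python
def payment_error_response(card_number: str, reason: str) -> dict:
    """Build an error payload that describes what went wrong while
    redacting the sensitive value that caused the failure."""
    last_four = card_number[-4:]
    return {
        "error": "payment_failed",
        "message": f"Card ending in {last_four} was declined: {reason}",
        "card_hint": "*" * (len(card_number) - 4) + last_four,  # masked, never the full number
    }

resp = payment_error_response("4111111111111111", "invalid expiration date")
print(resp["message"])
```

The caller gets enough detail to fix the problem, while the card number itself never travels back in the response.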
Outside of basic design considerations, any business that wants to build an API (and really, that’s going to be every business in the IoT economy) should consider two other aspects: politics and business goals. Company politics are where end users might see the most frustration. An example would be Google deciding to promote its own music service over Spotify on its Home device by offering a subpar integration. Politics can also show up when a competitor’s device can’t access an API at all, or faces rate limits that make it perform slowly or time out often. I anticipate this kind of API warfare between Nest and Amazon in the near future if they don’t patch up their spat.
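When a provider throttles requests, the standard signal is an HTTP 429 “Too Many Requests” response, and the usual client-side mitigation is exponential backoff. Here is a minimal sketch; `request_fn` is a hypothetical stand-in for whatever callable actually performs the API call.

```python
import time

def call_with_backoff(request_fn, max_retries=4, base_delay=0.5):
    """Retry a rate-limited call, doubling the wait after each 429.

    `request_fn` is any callable returning (status_code, body).
    Waits 0.5s, 1s, 2s, 4s by default before giving up.
    """
    status, body = request_fn()
    for attempt in range(max_retries):
        status, body = request_fn()
        if status != 429:  # anything other than "Too Many Requests"
            return status, body
        time.sleep(base_delay * (2 ** attempt))
    return status, body  # still throttled after all retries
```

Backoff softens throttling but can’t eliminate it; if the provider caps a competitor aggressively, the end user still feels the lag.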
When it comes to business goals, consideration can start with the information that you provide as part of your API, but might also be as direct as charging for access to an API or even paying others to use it. API calls do cost companies money since they have to provide servers to support information requests and developers to keep them up and running. However, they can also perform an invaluable scouting function for a company. For example, a company like Philips can see what cool things developers are doing with its lights if it looks at API data. It may then decide to buy a particular startup or hire a particular type of engineer.
Though I’ve dug deeper into the world of APIs, I still haven’t figured out why some of my individual devices behave so strangely. But I feel like I have discovered where the future of business contracts, and disputes, will play out in the new era of the internet of things. I can’t wait to learn more.