The future of healthcare is a digital version of your doctor

Imagine waking up in the morning, checking your blood pressure and then calling up a hologram of your doctor and asking if the reading is normal. The digital version of your doctor would look like her and would respond to your question based on a combination of her own medical knowledge and your patient data.

This digital doctor already exists, according to Dr. Leslie Saxon, the head of the Center for Body Computing at USC. She described a “virtual human me” on The Internet of Things Podcast and shared her predictions for the future of healthcare in a connected world.

“We’ve created a virtual human me. I’m an expert in cardiac rhythm disorders. My virtual human is fueled by voice recognition AI. And she can provide unbelievable in-depth content to you in a very deeply personal way,” Dr. Saxon says. “It’s very easy to create very realistic virtual humans. So, this looks like me … who, after you see me in a clinic visit, or if you’ve never met me and you’re living in Bangladesh, you can ask me questions about heart rhythm disorders.”

Saxon says the virtual doctor’s answers draw on her 30 years of experience answering patients’ questions and on clinical guideline data, and that the experience is deeply personalized, because the underlying AI engine can learn about you. The virtual Dr. Saxon already understands 3,600 questions, and the goal is to deploy versions of it in all kinds of places.

Saxon views this sort of effort as the future of medicine: a future where 80% of medical care is virtual and perhaps 20% relies on in-person visits. A key aspect of this vision is getting the knowledge of doctors to scale to the places where the patients are.

“What we’re seeing now is, we’re seeing this sort of merger of personally collected healthcare data with traditional data,” she says. “So, as you know, Apple announced this ability to support [electronic health records] on their platform, traditional electronic medical data as well as personal data. So, we’re now moving toward that virtual care vision. Telehealth is one small step, but it doesn’t really scale. All telehealth means is, you can get a doctor on a screen, a real doctor.”

This mix of wearables providing on-demand patient data, AI-powered virtual doctors, and in-person visits reserved for when the need is great will require better cybersecurity, a new regulatory framework, stronger privacy protections (Saxon says HIPAA is not enough), and peer-reviewed clinical data on what the readings from new sensors actually mean. Saxon also cautions that patients will have to take a much more active role in their healthcare. The paternalistic doctor who simply tells you what to do is going away.

Patients will have to seek out information, evaluate it and play an active role in developing healthy habits.

“We’re not keeping patients down on the farm anymore,” she says. “They have to understand their drugs. They have to engage in their care, they have to partner with us to make it better. Because I don’t want to spend a lot of time interrogating a patient like I’m a cop, asking ‘Did you do this or that?’ and leaving them feeling judged. I want them to come to me as a partner, and I’m going to add my little expertise in.”

To learn more, listen to the whole interview here or just hit play below.

Five automation ideas to improve your smart home lifestyle

We often talk or write about our smart homes without realizing that some folks don’t understand what a smart home can actually do. That’s partially because there’s such a wide range of things we can automate or control by voice in our homes that there’s no simple answer: The automations and smart devices I have are likely very different from the ones you have.

Still, there’s merit in laying out some examples for two reasons. First, we might actually be able to better answer the “what is a smart home” question with some practical, real world solutions. And second, sharing a few examples here will (hopefully!) inspire you to add to the list through our comments.

Collectively, we’ll all have a nice group of ideas to make life easier in our smart homes. Keep in mind that these are less a set of step-by-step instructions and more a conceptual list that you can implement or tweak based on your devices and software.

Don’t leave the door unlocked

I’ve mentioned this one before, but it’s an extremely effective solution to a problem I had: My young adult children sometimes come home at night after I’ve gone to bed. That’s not the problem, though. The issue is that they don’t always remember to lock the door. Since I have a Z-Wave deadbolt installed in my front door, I decided to have my smart home hub automatically lock the door if it has been unlocked for five minutes. If you decide to do this, make sure you don’t lock yourself out. My lock can be opened by my phone or watch, but it also has a keypad entry system.
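
If your hub supports custom rules, the logic behind this automation is tiny. Below is a minimal sketch of the trigger-plus-timer pattern in Python, assuming a hypothetical hub SDK that delivers lock state changes to a callback and exposes a lock() call; none of this is a real Z-Wave API. The same trigger-and-action shape applies to the fan, pet door, and calendar ideas that follow.

```python
# A minimal sketch of the auto-lock rule. The lock object and event names
# are assumptions standing in for whatever your hub's SDK actually exposes.
import threading

LOCK_DELAY_SECONDS = 5 * 60  # relock after five minutes unlocked


class AutoLockRule:
    def __init__(self, lock):
        self.lock = lock  # hypothetical device handle with a .lock() method
        self.timer = None

    def on_lock_event(self, state):
        """Called by the hub whenever the deadbolt reports a state change."""
        if state == "unlocked":
            self.timer = threading.Timer(LOCK_DELAY_SECONDS, self.lock.lock)
            self.timer.start()
        elif state == "locked" and self.timer is not None:
            self.timer.cancel()  # someone locked it manually; stand down
            self.timer = None
```

The one detail worth copying is the cancel step: without it, the rule would send a redundant lock command after someone has already locked up.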

Start the morning right

For a while I had my downstairs kitchen light go on at a specific time so that my wife wouldn’t walk down to a dark room. Scheduling this by time is pretty easy, but some days she sleeps in and some days she wakes up earlier. What she always does before going downstairs, however, is take a shower with the bathroom exhaust fan on. The last thing she does, without fail, is turn that fan off before heading downstairs.

A smart switch for the fan provides a simple trigger event for home automation: once that switch hits the off position, the kitchen smart light — not to mention my coffee maker on a smart plug — can be turned on at exactly the right time, every time.

Keep an eye on the kids or pets as needed

We walk our dog, so this doesn’t apply to me, but Stacey has a small pet door in her home so the dog can go outside. With a webcam in a nearby window she can keep an eye on the dog, but it may not make sense to have that camera on all the time. Adding a tilt sensor to the pet door, similar to the ones you find with smart garage controllers, can trigger the webcam to power on. The same approach could be used for kids going out the back door: Add a magnetic sensor to the door and fire up the webcam to make sure playtime is safe.

Sundown is a great trigger for indoor lighting

One of the first automations I ever set up was to turn on the outdoor lights at or just before sundown. It’s easy to do, and although the sun sets at a slightly different time every day, most smart home hub software can adjust for this. It took me months to realize it, but sundown is a perfect event trigger for indoor lights too. Sure, you can keep some or all of the house lights off until you get home and have them light up based on geofencing, a garage door opening, or some other mechanism. But why not use the sun instead of extra hardware?
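
Most hubs compute sunset for you, but if you roll your own automations, the trigger time is easy to get. Here’s a sketch using the third-party astral Python library; the coordinates and the lights-on callback are placeholders, not anything from this article.

```python
# A sketch of a daily sunset trigger, assuming the astral library
# (pip install astral). Swap in your coordinates and your hub's call.
import datetime
import threading

from astral import LocationInfo
from astral.sun import sun


def seconds_until_sunset(lat=47.61, lon=-122.33):
    here = LocationInfo(latitude=lat, longitude=lon)
    today = sun(here.observer, date=datetime.date.today())  # aware UTC datetimes
    wait = today["sunset"] - datetime.datetime.now(datetime.timezone.utc)
    return max(wait.total_seconds(), 0.0)


# Re-run this daily (from cron, say) to schedule tonight's trigger.
threading.Timer(seconds_until_sunset(), lambda: print("indoor lights on")).start()
```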

Trigger routines and scenes based on calendar events

Stacey has mentioned a routine/scene she created for doing yoga. When she asks Alexa or Google Assistant to run the “Yoga” scene, her TV turns on, the downstairs temperature lowers to 75 degrees, and a Lutron fan and Philips Hue lights both turn on. That’s useful, but I took it a step further, and you can too with your own scenes.

Try connecting IFTTT to your Google Calendar and a supported home automation shortcut or scene. I created a 9am Yoga event as a trigger on my calendar to fire up a similar routine. Lights were dimmed and relaxing music started at nine on the dot, but sadly, I didn’t do the yoga part. Not only does this eliminate the voice command, but it makes it more likely you’ll actually do the yoga, or whatever activity you want to carve out time for.
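
If you want to test the scene half before wiring up the calendar, IFTTT’s Webhooks (formerly Maker) service lets you fire an applet from a script. A sketch, where the event name and key are placeholders for your own Webhooks settings:

```python
# Fire an IFTTT applet via the Webhooks service. "yoga_time" and the key
# are placeholders; find yours at https://ifttt.com/maker_webhooks.
import requests

IFTTT_KEY = "your-webhooks-key"
EVENT = "yoga_time"

resp = requests.post(
    f"https://maker.ifttt.com/trigger/{EVENT}/with/key/{IFTTT_KEY}",
    json={"value1": "dim the lights", "value2": "start the playlist"},
    timeout=10,
)
resp.raise_for_status()  # raises if IFTTT rejected the request
```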

This startup could make predictive inventory possible

Pulsa designed the dashboard for its sensor tracking app to be easy for anyone to use. Image courtesy of Pulsa.

The world of industrial IoT is rarely sexy, but it can be exciting. And right now I am really excited about a San Francisco startup called Pulsa, which is making sensors that measure pressure in gas tanks and the weight of items in inventory. Formed in June 2016, the startup first created sensors that measure the pressure of gases in those big cylinders sold to everyone from industrial manufacturers to restaurants that use CO2 in their soda machines.

The sensors communicate with a Pulsa-supplied gateway to let companies keep track of their gas inventory without having to send a person around to manually check it. Pulsa has also built predictive algorithms to anticipate the need for more gas and help companies keep it in stock without buying excess inventory, just in case.
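
Pulsa hasn’t published how its algorithms work, but the simplest version of such a forecast is easy to picture: estimate the recent drain rate from sensor readings and extrapolate down to a minimum safe level. A toy sketch under that linear-usage assumption (the 200 psi floor echoes the customer example below):

```python
# A toy depletion forecast: fit the recent drain rate, extrapolate to the
# minimum safe pressure. Real predictive algorithms would do far more.
from datetime import datetime, timedelta


def hours_until_empty(readings, min_psi=200.0):
    """readings: list of (timestamp, psi) tuples, oldest first."""
    (t0, p0), (t1, p1) = readings[0], readings[-1]
    rate = (p0 - p1) / ((t1 - t0).total_seconds() / 3600.0)  # psi per hour
    if rate <= 0:
        return float("inf")  # no net drain observed
    return (p1 - min_psi) / rate


now = datetime.now()
history = [(now - timedelta(hours=8), 1450.0), (now, 1210.0)]
print(f"Order a new tank in about {hours_until_empty(history):.0f} hours")
```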

Dave Wiens, the CEO and co-founder of Pulsa, says that the company’s pressure sensors for gas cylinders are already in customer trials and the weight sensors should be ready for trials by the end of April. Customers buy the sensors and receive an accompanying gateway. Today’s gateways use Wi-Fi, but Wiens says that later this year they will have a gateway with Wi-Fi and cellular backup. Those gateways will have NB-IoT, LTE Cat-M1, and 2G fallback, which should enable them to work anywhere in the world.

The sensor hardware costs $79 for the current pressure sensors, which includes the gateway and cloud services for the app. Afterward, the app costs $48 a year per sensor.

As for the return on investment, one of Pulsa’s trial customers manages about 200 gas cylinders and has sensors on 50 of them. So far that customer estimates that it has saved $1,000 a month on easy-to-measure things, such as the gas in its tanks. Eliminating the need for an employee to check those levels at the end of each day saves the customer roughly $200 a month alone.

More impressive is how the ability to check the actual amount of gas left and predict when it will run out saves the customer roughly $1,500 every other month, the cost it used to incur when a tank unexpectedly ran dry in the middle of a manufacturing process.

Plus, the customer gets to use more of the gas. That’s because historically the company would toss a tank at the end of the day if it reached 200 psi, due to worries it would run out overnight and cost precious production time. Ahead of weekends and holidays, engineers would toss cylinders with less than 300 or 400 psi. But the predictive algorithms that Pulsa provides let the managers of the process feel comfortable using up more of the gas without worrying about running out. The customer estimates this saves it about $750 in what would otherwise be wasted gas.

The customer also plans to look at how to reduce its backup inventory by 10% since it will now have a more accurate sense of which gases it needs and when. In other words, the sensors on gas cylinders and the coming weight sensors will hopefully do for inventory what predictive maintenance is currently doing for production.

Wiens says that Pulsa is ready to sell the pressure sensors for gas cylinders and has received a lot of inquiries about the weight sensor product. Potential customers for that product include grocery stores that want to track produce inventory, building management companies that want to measure the amount of cleaning products they have left, and more. Heck, I’d love to have something like it for my fridge so I know when my milk is running low.

But Pulsa isn’t aiming for the consumer market. So far, the enterprise and industrial market is plenty for it to handle.

5 reasons the IoT needs smarts at the edge

Industrial customers value edge computing for five main reasons.

Edge computing is hot right now, but not everyone understands why so many people are so focused on keeping their data in gateways or on on-premises servers instead of sending it to the cloud. While it may seem like a huge shift to bring processing to the edge of the network as opposed to sending all of the data to the cloud, for many IoT use cases the cloud was never going to be a viable solution.

There are five primary reasons why the edge is winning when it comes to the internet of things. Three of them are technical limitations on cloud data transfers and two are dependent on business culture and the perception of cloud security. Let’s cover them.

1. Security – This is one of the favored reasons for big industrial companies. They don’t want to connect their processes to the internet because it exposes their operations to hackers and data breaches. For example, at the Honeywell User Group meeting I attended last year, most of the customers of Honeywell’s industrial automation products were loath to even put wireless infrastructure in their plants for fear of security breaches. Some of this is perception of risk, but thanks to a variety of hacks, from Target’s breach (which began in its HVAC system and ended up compromising customers’ credit cards) to attacks raising concerns over hackers targeting infrastructure, this is a legitimate fear for certain types of industrial processes.

2. IP – Related to the issue of security are concerns over proprietary data and intellectual property. High-quality sensors can be used to derive important information, such as a refinery process that counts as a trade secret. Jaganath Rao, SVP of IoT Strategy at Siemens, says that food companies are particularly sensitive to these sorts of issues. Imagine if the recipe for Coke could be inferred through its industrial data, for example.

3. Latency and resiliency – Latency is a measure of how long information takes to travel over a network. Whether you are waiting for a Netflix movie to load or playing “Call of Duty,” latency matters. And when digital bits drive electrons or machinery, latency matters even more. In the home, for example, cloud-to-cloud services can add a second or two of delay when I turn on my lights using an app. That’s irritating. But in an industrial process, sending data from a machine to the cloud and back again can cost a lot of money, or even lives.

One of the more popular arguments for edge computing is autonomous cars. The idea is that a car going 60 miles an hour needs to be able to identify a threat and stop the car instantly, not wait a few seconds to make a round trip to the cloud. In the industrial world, a machine that is in danger of failing might only have a few seconds or a minute of warning. A sensor might pick up the new vibration signature that signals a failure and then send that to a local gateway for processing. The gateway needs to have the ability to recognize the failure and either alert someone or send back instructions to shut off the machine within milliseconds or seconds.

This also ties into resiliency. Network coverage can falter and the internet can go down. When that happens, cars, heavy industrial machinery, and manufacturing operations still need to work. Edge computing enables them to do that.
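
To make the latency and resiliency argument concrete, here is a minimal sketch of a gateway-side check: score each vibration sample against a rolling baseline and act in the same loop iteration, with no network hop. The window size, the z-score threshold, and the shutoff hook are all assumptions for illustration.

```python
# Local anomaly check on a gateway: flag a sample that sits far outside
# the recent baseline and act immediately. Thresholds are illustrative.
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=500)  # recent vibration amplitudes


def on_sample(amplitude, shut_off_machine):
    window.append(amplitude)
    if len(window) < 50:
        return  # not enough history to judge yet
    mu, sigma = mean(window), stdev(window)
    if sigma > 0 and (amplitude - mu) / sigma > 4:
        shut_off_machine()  # local decision: milliseconds, no cloud round trip
```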

4. Bandwidth costs – Some connected sensors, such as cameras or aggregated sensors working in an engine, produce a lot of data. As in multiple gigabytes of data every hour or, in some cases, every minute. In those cases, sending all of that information to the cloud would take a long time and be prohibitively expensive. That’s why local image processing or using local analytics to detect patterns makes so much sense. Instead of sending terabytes of raw image data from a connected streetlight, a local gateway can process that data and then send the relevant information.

5. Autonomy – The problems of latency and resiliency bring us to the final reason the edge will flourish in the internet of things: autonomous decision-making can’t rely on the cloud. For many, the promise of connected plants or offices is that a large number of processes can become automated. If a machine can monitor itself and the process it’s performing, then it can eventually be programmed to take the right action when problems occur. So, for example, if a sensor detects a pressure buildup, the system can release a valve further down the line to relieve that pressure. But once a process relies on a particular level of automation, it’s imperative that those automated actions happen in time, every time.
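
Written as code, the pressure example is just a control loop that lives entirely on the gateway, so it keeps working when the internet doesn’t. The sensor and valve objects here stand in for whatever fieldbus or PLC interface a real deployment would use; the thresholds are made up.

```python
# A sketch of a local control loop with hysteresis. All device objects and
# limits are placeholders, not a real industrial API.
import time

MAX_PSI = 120.0


def control_loop(pressure_sensor, relief_valve, cloud=None):
    while True:
        psi = pressure_sensor.read()
        if psi > MAX_PSI:
            relief_valve.open()  # act immediately, no round trip
        elif relief_valve.is_open() and psi < 0.9 * MAX_PSI:
            relief_valve.close()  # hysteresis so the valve doesn't chatter
        if cloud is not None:
            cloud.report(psi)  # telemetry; in practice, queue this off-thread
        time.sleep(0.1)
```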

Most of these are fairly common sense, but what many in the traditional IT world miss is that when you start moving real-world machinery around instead of just bits, it’s no longer good enough to provide 99.99% reliability or millisecond latency. When challenges in the digital world meet the physical world they are magnified; real people’s lives or production processes are on the line, with real-world consequences.

That’s not to say the cloud won’t pick up more IoT work over time, but right now it’s a pretty scary proposition for a lot of IoT use cases.

IoT news of the week for March 30, 2018

Common sense IoT security principles drafted in the UK: Most of these may sound familiar to you since we’ve suggested them time and again on the podcast, but it’s nice to see some formalization. The UK’s Department for Digital, Culture, Media & Sport has drafted five best practices when it comes to securing and using IoT devices. These include not allowing devices to use default passwords, better communication of customer data usage, and a public contact for reporting issues. Perhaps best of all is the practice that requires device makers to keep software updated and, more importantly, to explain how long software updates will be made available. That sounds like our idea of an “expiration date” for IoT! The draft is open for public comment until April 25th. (National Law Review) — by Kevin Tofel

Nvidia’s deal with ARM is a big deal for AI at the edge: Nvidia announced that ARM will integrate Nvidia’s open-source Deep Learning Accelerator technology, which helps run convolutional neural networks (CNNs), into ARM’s new Trillium architecture. ARM announced Trillium last month as a dedicated processor design for machine learning. The deal is significant because it basically embeds Nvidia’s CNN technology in almost every potential chip used in the internet of things. CNNs are used for image recognition, and as this analyst notes, the deal means Nvidia is giving the tech away for free because it believes the technology for running CNNs is basically commoditized. Nvidia would rather give it away to cement its market share. I also think it’s smart because the internet of things and edge devices will need a heck of a lot of image-processing capability, as noted in my story up above. All that video data isn’t going to make it to the cloud. (Forbes)

WannaCry is still hitting manufacturing plants: Last year, the WannaCry ransomware and its NotPetya variant took down companies including DHL and Merck in a new style of attack. WannaCry was halted fairly quickly, but NotPetya and other variants have continued to spread. Boeing was recently hit, and fears that the attack had affected the company’s production line and airplane deliveries caused panic among its staff and customers. Buried in the story are two scary facts: one is the realization of just how many old, unpatched Windows machines there are in the manufacturing world, and the other is that WannaCry had recently hit at least two other manufacturing firms, taking their operations offline. (The Seattle Times)

A sensor made of gelatin? A startup named Mimica is making a food label out of gelatin that’s engineered to decay at the same rate as the food it’s attached to. As the food spoils, the label itself spoils, becoming bumpy to the touch and letting someone feel whether their milk has soured or their two-year-old vial of sunflower oil has gone rancid. The original idea came out of a desire to build an expiration label for the visually impaired, but the idea resonated across a variety of food companies and now the founder is building a company around the tech. I love it. (The Spoon)

Who owns your outdoor camera data? After my colleague Kevin used a Nest cam to prove that a neighbor hit his parked car a few days back, I’ve been curious about how much value that video would have for the police and the insurance companies, especially since Kevin’s neighbor denies she hit his car. However, there are a lot of other questions that arise when video doorbells and outdoor video cameras are positioned everywhere. One is how long a homeowner must keep the images; another is whether the police have a right to compel camera footage in the case of a crime. For more, check out the article. (CEPro)

IBM’s Watson could improve Siri: Apple and IBM announced yet another partnership, and although most of these deals focus on enterprise applications, the latest one could trickle down into Siri. At least that’s what my colleague Kevin thinks. His take is that using IBM Watson for pattern recognition and machine learning in the smart home might lead to a smarter Siri, one that doesn’t just handle your spoken commands better than she does today but a Siri that can anticipate your needs and actions. It sounds far-fetched, but when you follow his logic, you can see the potential. Semi-autonomous homes, anyone? (StaceyOnIoT) — by Kevin Tofel

Your next business idea? As someone with dozens of connected gadgets that require an outlet, I’m constantly tripping over wires, strategically placing circuit breakers everywhere, and bemoaning the invasion of tech in my well-designed spaces. This post offers some innovative ways to redesign power cords and outlets. Someone please implement them. (Medium)

From the biased algorithms department: This article and its radio version take a look at an MIT researcher’s efforts to show how certain machine-trained facial recognition models develop biases that leave the computer unable to identify dark-skinned faces, especially those of women. What’s eye-opening is how bad they were at it. The accuracy rate for identifying light-skinned men’s faces was 99% across the board. But when identifying darker-skinned women, IBM’s error rate was almost 35%, Facebook’s was 34.5%, and Microsoft’s was 20.8%. So when someone says we’re at 99% accuracy when it comes to facial recognition, it’s probably a good idea to ask what their sample population looked like. (WGBH News)

Does your AI need a therapist? Computer science, law, and business are rapidly trying to come to grips with the idea that artificial intelligence based on neural networks, computers training themselves to understand the world, is incomprehensible to people. We’re also becoming aware that not only do we infuse our own human bias into these algorithms, but these machines “think” in a way that is foreign to us. And since we can’t understand them, they can behave in ways we can’t anticipate. This is terrifying if you’re trying to use neural networks to build a self-driving car or figure out the best business process to use. One idea proposed here is some sort of role for people who try to understand the world from the machine’s point of view, what the article calls a psychotherapist for algorithms. (FastCompany)

Fibaro Wall Plug review: A smart, well-designed outlet that monitors energy use

Fibaro Wall Plug with USB port (Credit: Fibaro)

I’ve long used Belkin’s $35 Wemo Insight Switch in my home, both for automation and energy monitoring. Now, Fibaro has a new, competing product called the Fibaro Wall Plug. It comes in two options: a $50 version and a $60 model that adds an integrated USB plug. I’ve been testing a review unit of the latter, and it’s a great, if more expensive, alternative, though it has some automation limitations depending on which hub you use.

Yes, you’ll need a hub for the Wall Plug because it uses a Z-Wave radio for connectivity. In my testing, I connected the Plug to a SmartThings hub — technically an Nvidia Shield TV with Samsung SmartThings USB Link — but to use all of the Plug’s smart functionality, you’ll really need a Fibaro Home Center controller and the Fibaro mobile app. I’ll explain why in a bit.

From a design standpoint, the plug is elegant. I like the look of it and also the fact that it doesn’t cover up the second outlet in your wall, which some smart plugs can do. The rounded corners and small-ish size of the 2.32-inch plug look very modern and clean.

Note that since SmartThings doesn’t natively support the Fibaro Wall Plug, I had to install two custom handlers so that the SmartThings hub would recognize and report usage on both the main outlet and the USB port. It’s a pretty easy, cut-and-paste process, but worth noting.

Getting connected

Once that’s done, there’s not much else to installing the Fibaro Plug. You simply triple-click a button on the Plug to put it in pairing mode and use your hub to complete the process. I was able to pair it with my SmartThings hub in under a minute.

Once connected, you just plug in any standard electrical or USB device to the Fibaro unit. I used it to power the Raspberry Pi we set up for our IoT Podcast VM and also some other appliances, such as my Keurig coffee maker and June oven. I also added the Plug to both my Amazon Echo and Google Home accounts so I could turn the plug on or off with voice commands. The Fibaro Plug worked with both assistants for basic power commands.

Monitoring energy usage

One of the unique features of the Fibaro Plug is the LED ring on the front of it. The color of the ring changes to indicate how much power the plug is drawing, using seven colors: white, red, green, blue, yellow, cyan, and magenta. Magenta, for example, indicates a draw between 1,350W and 1,800W.

The June Oven uses 1675W, causing the Wall Plug LED ring to show magenta.

Initially, the LED ring didn’t light up when the plug was under a load. A quick reset of the Plug (hold the Plug button until the LED turns yellow, let go, and quickly tap the button) fixed it. You can also see power information in real time, or view historical use, in the SmartThings app for both the main outlet and the USB port. The LED is configurable if you don’t want it on at all or if you want to customize the colors for different power usage levels.

The SmartThings app can also control the state of the Fibaro Plug, meaning with one tap on your phone, you can turn the Plug on or off. You can even do this when away from your home. Personally, I like to have it always on since most appliances have their own power switch. However, if you’re planning to use the Plug with a lamp, this is an easy way to turn that light on or off, even if you’re not using a smart bulb.

Automations

Automation, and getting information out of the Fibaro Plug, is trickier, however. Yes, you can create automations that turn the plug on or off, which is helpful for lights, but that’s about it in the SmartThings world. And unless you use the Fibaro hub and app, you won’t get energy alerts. I can live without notifications on energy usage, although it would be nice to know if my refrigerator lost power. But my plans for automating the plug fell a little short when using it with SmartThings.

For example, I’d like to put one of these plugs in our master bathroom and have my wife use it with her hair dryer. Why? Because drying her hair is the last thing she does in the morning before she heads down to the kitchen. If I could automate the kitchen light based on the power draw of the hair dryer, she’d automatically enter a well-lit kitchen. The only way I can see to do this would be to use a Fibaro gateway and corresponding Fibaro app.
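
For what it’s worth, the rule itself is simple on any hub that can act on power reports. A hub-agnostic sketch, with guessed thresholds for a roughly 1,500W dryer and a placeholder for the kitchen scene:

```python
# Detect "sustained high draw, then a drop to idle" and fire a scene.
# Wattage thresholds are guesses; turn_on_kitchen is a placeholder hook.
HIGH_WATTS = 800.0
IDLE_WATTS = 10.0


class DryerDoneRule:
    def __init__(self, turn_on_kitchen):
        self.turn_on_kitchen = turn_on_kitchen
        self.was_running = False

    def on_power_report(self, watts):
        if watts > HIGH_WATTS:
            self.was_running = True
        elif self.was_running and watts < IDLE_WATTS:
            self.was_running = False
            self.turn_on_kitchen()  # dryer just shut off: light the kitchen
```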

Regardless, the Fibaro Plug works well if you understand the hub and software limitations when using it with SmartThings, which could eventually change with an updated device handler. If you do have a Fibaro Home Center, you’ll get all of the impressive functionality the Plug offers.

Zededa creates a new architecture for the edge

Any connected computer could become a platform for Zededa’s distributed cloud.

Edge computing efforts are a dime a dozen nowadays, but after mocking the launch of Zededa a few weeks back for its buzzword-heavy press release without any technical details, I talked to the company’s CEO, Said Ouissal. He explained exactly what the startup’s vision is, and it’s a pretty novel way to build applications that could run on any gateway device.

The overarching goal of the startup is to help companies put software on edge devices that can be run securely, with little expertise needed from customers. Those devices might be machines aggregating sensor data or traffic cameras monitoring a street. Ouissal sees each of these devices as a set of infrastructure with common traits, which means Zededa developers can build applications that can span many different devices — and needs.

Zededa’s approach is akin to how developing software for the cloud works today, except that with the cloud the underlying physical hardware is relatively similar. In the IoT world, there are seemingly endless different types of computing devices — from a $6 Pi-based computer to a powerful Xeon gateway. There’s also a big question as to whether one needs to build vertical solutions for the industrial world.

A large contingent of industrial IoT entrepreneurs are betting that customers want to buy hardware, software, and cloud services that are vertically integrated so they don’t have to manage complex IT sourcing for something that could become vital to their business. Zededa thinks heterogeneous hardware and existing customer hardware can instead be transformed into something that handles a wide variety of applications. It basically wants to become the Amazon Web Services built on top of millions of connected IoT devices.

To do this, Zededa is creating a software package that combines a hypervisor and a newer concept in computing called unikernels. Unikernels are packages of software that contain an application and only the parts of the underlying operating system required to run that application. So if the application doesn’t need a file system, that gets jettisoned. The end result is a very simple blob of code (I’d call it a container, but that means something different).

A container, such as those managed by Docker or Kubernetes, provides everything a piece of software needs to run, such as the OS, runtime, and libraries. It’s more flexible than a virtual machine created by a hypervisor, but has more overhead than a unikernel.

The hypervisor is important as well. While newer IoT implementations might view hypervisors as a relic of the server era, there are millions of older connected computers running Windows operating systems that can’t be shoved in a container. For those, you need a hypervisor, says Ouissal. He’s not alone. Last week, The Linux Foundation released an open-source hypervisor designed for the IoT with contributions from Intel and others. It’s called ACRN.

These elements exchange data with the machines they run on and also send information back to a cloud operated by Zededa. The blobs of code and the hypervisor help ensure that the applications accessing the edge device stay secure even if the device is tampered with, while the cloud governs the way applications run on the extended hardware devices.

Some of this approach reminds me of what Resin.io is doing with its ability to run containers at the edge, allowing customers to manage applications across their fleets of IoT devices in a way that’s closer to how they manage applications across a cloud infrastructure. But a lot of this also feels very novel, such as the adoption of unikernels that allow software to run in constrained environments.

I’ve spent years trying to define an edge computing stack, and it shifts depending on who I talk to. The one constant, though, is that it’s trying to use existing technology to solve what feels like a very new computing paradigm. And I’m not using the word “paradigm” as jargon. Creating a trusted, secure, auditable, and manageable way to deploy software across millions of nodes is a very different challenge for computing. It really is a new paradigm.

I’m not sure if Zededa’s software is the right path forward, but when it launches later this year, I can’t wait to see how people build on it and with it.

Bulbs are broken. Services will win in lighting.

In the warehouse of the future, the lights are watching you. And communicating with your equipment. Image courtesy of OSRAM.

One of the big promises of the digital transformation occurring as everything gets connected is a change in business models. Companies can go from selling things and having a one-time relationship with a client to selling a service and having an ongoing relationship. In the lighting world, this has been happening for years.

Indeed, if you want to see how connectivity can reshape a business, the lighting industry offers a great case study. As soon as digitization hit in the form of LEDs, the industry found itself more concerned with semiconductors — the source of LED light — than with old-school chemicals and filaments. But it also found itself with a problem. LEDs last a lot longer than traditional light bulbs, so instead of selling a product that’s replaced every year or two, the industry suddenly started selling one that lasts some 20 years.

That’s a big shift. Meanwhile, startups saw the transition to energy-saving LEDs as an opportunity to create an entirely new form of network inside buildings, anchored by these new LED systems. Digital Lumens, Enlighted, and others were subsequently formed to replace old lighting with LEDs that also measure things like temperature and motion. From there, these startups were able to lower energy costs, but they were also able to start offering consulting services aimed at helping businesses make better use of their space or even track inventory.

That’s paying off now for OSRAM, a German lighting maker that purchased Digital Lumens last year and has now integrated its technology as well as other OSRAM tech into a new lighting platform called Lightelligence.

OSRAM has long made bulbs; with Lightelligence, it is now starting to sell lighting as a service. A warehouse customer in Europe currently contracts with OSRAM to provide 300 lux (a measure of brightness) in its buildings when needed. Thorsten Mueller, head of innovations for OSRAM, explained that this is possible because the fixtures containing the lights have computers that can measure and process 40 different parameters about the light itself and the environment it’s in.

OSRAM has also enabled other technologies that work with its lighting software to understand the room and what’s going on inside of it. For example, in retail environments, the platform brings in data from cameras or Bluetooth beacons that can indicate where people congregate in a store or where there are choke points. It’s worth noting that in the camera example, Mueller says the images aren’t used, only the data. So the platform recognizes that four people might be clustered near a clothing rack, but it won’t know who they are. In Europe, where privacy is tightly regulated, this is a necessary precaution.

In industrial environments, LiFi can be used to help guide robots or equipment around a plant or warehouse. LiFi is a way of transmitting data using light over short distances. It does require a transmitter and receiver, so it’s only practical for environments where the owner controls both the physical lighting infrastructure and the equipment they want to track.

The ultimate goal is to use different technologies and the sensors embedded in the lighting system to offer a variety of services. Light is an obvious one, as is location tracking. But there are also cool applications, such as those mentioned above, or even using the lights to convey relevant information.

For example, Mueller says that in a warehouse environment one customer is using the system to track forklifts and help them plot the best route to get inventory. But it also can blink the lights if it sees two forklift operators on a collision course. These sorts of services provide more value than selling a light bulb every few years.

However, in the quest to turn everything into a service, there are concerns. One is lock-in. Because the best services will be those that are easiest to use, OSRAM allows other companies’ bulbs in its fixtures and lets people write applications for its platforms. It shares APIs and offers cloud-to-cloud integration so your lighting platform can talk to your building security platform or the elevator platform. That way, customers don’t run into obvious signs of lock-in.

Additionally, in revamping its business toward more services, OSRAM’s customer changes from the facilities folks to the C-level executives and/or plant managers, who are thinking holistically about the success of the business.

Finally, in the services world, companies like OSRAM will face a host of new competitors that are also trying to sell elements of their business as a service. Lighting can provide interesting sources of data, such as where people are in a building, but other sensors or platforms could offer the same data. Or perhaps a far-thinking utility might offer a package of comfort that includes lighting, HVAC and warm water. A buyer might decide to go for something like that instead.

GE’s Current unit basically sells energy management, which includes lighting among some other elements such as power for operations. So even as the lighting industry undergoes its digital transformation, there are still a lot of questions left to answer and a really unsettled playing field.

IoT news of the week for March 23, 2018

A lights-out construction site? One of the cited goals of better AI and connectivity is factories that can operate without people inside. Such “lights out” operations won’t need heating, cooling or — you guessed it — lights. But a California startup is also thinking about how to bring this more autonomous approach to the construction industry, using drones for mapping and AI for guiding vehicles around a site. (MIT Technology Review)

I wrote about the internet of things breaking Wi-Fi: As part of my monthly column in IEEE Spectrum, I wrote about the challenges facing Wi-Fi as people ask more of the technology. The article covers challenges with mesh systems and some of the recent efforts to make provisioning of devices easier — but also proprietary. (IEEE Spectrum)

Connected pest control: Comcast’s new LoRa network, called MachineQ, shared some of its customer wins this week, and one in particular caught my eye. The customer was Victor, which makes rodent traps. The reason I paid attention is that this is the third example I’ve heard in the last few months of a connected mousetrap. This is actually a real business opportunity, because it’s a product where status matters most in the small window of time after you’ve caught a critter. Instead of checking the trap every day, or forgetting and waiting for your nose to tell you, you get a notice when the trap is full so you can address the problem then and there. Imagine how useful this is in a spread-out warehouse or even a restaurant kitchen. Pests are a big problem, and IoT might make them less onerous. (Comcast)

A deep dive on Windows IoT Core and more: Everyone interested in the big IoT dollars is building in hooks to get data from a device to the cloud easily. This week a smaller player, Resin.io, announced a board that lets you tie directly into its software, while a few months back Amazon brought in the creator of FreeRTOS with plans to let people get devices running the real-time operating system to easily connect to AWS. This article makes the case that Microsoft’s Windows IoT Core is an OS that offers a similar link between the device and the cloud. The article also covers the heavier Windows IoT Enterprise and sheds light on what Microsoft is doing in this area. Since Azure is the industrial IoT’s favorite cloud, I suggest paying attention. (Network World)

For the smart home, think automations, not routines: Over at my site, Kevin Tofel writes about the limits of voice control even as Amazon and Google improve their platforms with new routines. (StaceyonIoT)

Throwing good money after bad? Gartner expects worldwide spending on IoT security to reach $1.5 billion in 2018, up from $1.2 billion the year before. It said that helping drive this spending is the fact that one in five companies has reported an IoT security breach in the last three years. That number seems low to me, but maybe the rest of those having security troubles merely don’t know they’ve been pwned. (The Economic Times)

So much on Siri: Last week, The Information did an excellent teardown of how Apple’s Siri came to be and how fragmented it has become. This has hurt it in the smart home market, but ZDNet is making the case that Siri’s current flaws may not hurt Apple forever. It argues that Apple’s tight control could enable Siri to develop a truly conversational interface that will make efforts like Alexa seem artificial and forced. We’ll see. (The Information, ZDNet)

Walmart’s Handy partnership doesn’t go far enough for the smart home: I used to think that the limiting factor for most smart home devices was the challenge of installation, but I’ve come to the conclusion that the real issue is pulling everything together into something a consumer can understand and value. And for that, you need a pro or a service like Amazon’s experts or Best Buy’s Geek Squad. Handy offers basic installation, but it doesn’t go quite far enough. That said, if you just want a new video doorbell then being able to buy it at Walmart and sign up at the store for an install does work. (Walmart)

How IBM’s Watson could bring more smart home intelligence to Siri and HomeKit

Earlier this week, IBM announced a new partnership with Apple, explaining how it would be adding IBM Watson Services to Core ML. Since Watson has already proved its prowess on Jeopardy, most folks know what Watson is. Core ML is probably less familiar: It’s Apple’s machine learning framework for the company’s software platforms. Specifically, Apple says that Core ML can be used with Siri, Camera and QuickType.

How Watson works

That reference to Siri jumps out at me. Granted, the IBM-Apple news is really geared toward building apps for enterprises — Apple and IBM have been partners in this area for a few years now — but I’m thinking ahead to other areas where Watson could benefit Apple products. And Siri surely needs help, especially inside the Apple HomePod.

How so? Well, let’s step back a minute and see how Watson works today.

This video provides a fantastic explanation, but if I had to summarize it, here’s how I see it: Watson ingests large amounts of unstructured data, most of which today is written information. After analyzing that data for patterns, Watson attempts to structure it to understand both the content and the intent of any actions taken upon that data.

This is far more advanced than simply scanning a never-ending stream of question-answer pairs because not every question is asked the same way and that can change the meaning of the question or analysis. There’s certainly more to Watson than my limited interpretation, but these are the parts most relevant to my thought process.

Data in the smart home: Context and intent

So what if the unstructured data were human behavior in a smart home? Theoretically, Watson could determine both the context and the intent of the people in that home and, through pattern recognition, possibly anticipate their needs from such insights. This is the autonomous level of smart homes that I alluded to last week when discussing routines and automations.

To be more specific, Watson could help make sense of all of the actions we take in, around, and near our homes: when we generally wake and leave for work, what we cook and when, who comes and goes, when we sit down to relax, and what we typically do during that time. For a home to be semi-autonomous, certain patterns need to be recognized from these actions. And those patterns can be combined with already available verification data such as GPS location, network traffic from Netflix, or music from an online streaming service.

At that point, a digital assistant such as Siri can begin to anticipate things and make insightful suggestions without any programming or user configuration, the two things routines and automations require today.

For example, I typically retire to the home office at some point after dinner but I don’t do work. Instead, I turn a light on to read a book or watch TV and I may play some low-volume music. Now imagine if Siri knew that, thanks to Watson.

I might head upstairs to the office and find the light already turned on to my preferred brightness. Siri could proactively ask if I wanted to catch up on the show I most recently watched on Netflix. Perhaps I respond and say, “No thanks, I’m going to read for a while.” Maybe Siri prompts to see if I want music that’s tailored for light background noise while I read. You see where I’m going.
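
Even a toy version of this pattern mining makes the idea concrete: log each (time-of-day, action) pair the home observes, then surface the action that usually happens around the current hour. Watson’s models are of course far more sophisticated; this sketch just shows the shape of the problem.

```python
# Count (hour, action) pairs and suggest the most common action for the
# current hour. A deliberately tiny stand-in for real pattern recognition.
from collections import Counter
from datetime import datetime

log = Counter()  # (hour, action) -> count


def observe(action, when=None):
    hour = (when or datetime.now()).hour
    log[(hour, action)] += 1


def suggest(now=None):
    hour = (now or datetime.now()).hour
    candidates = {a: n for (h, a), n in log.items() if h == hour}
    return max(candidates, key=candidates.get) if candidates else None


observe("office_reading_light_on")  # logged night after night...
print(suggest())  # ...so tonight the home can offer it proactively
```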

Google is doing this with Docs already

If it sounds impossible that such patterns could be detected or useful, think about Google Drive. Using its own machine learning, Google knows when I typically return to specific documents and it highlights them at the appropriate time. Think context and intent here.

A perfect example is when Stacey and I collaborate on the IoT Podcast show notes. The day and time of that effort varies, but I’d say that 90% of the time, when I open Google Drive to add topics for the next show, the spreadsheet appears above my Drive contents in the “Quick Access” area. In fact, under the document, it says, “You usually open this Sheet around this time.” It’s a simple example of pattern recognition, but it’s also a powerful one.

If Google can do this with Drive documents and Watson can do this with unstructured, written data, it’s just a matter of doing the same thing with a different type of data: Objects and their actions in the smart home. There’s no guarantee that Apple is working with IBM on this to make Siri a smarter digital assistant in the home, but if they’re not, I think they should be.
