Live, die, repeat: The security shortcuts endangering IoT device adoption

IoT devices repeatedly exhibit the same flaws, creating a massive vulnerable attack surface that will inevitably lead to more major attacks. DDoS attacks already increased 91 percent over the course of 2017 due to vulnerable deployed devices, yet estimates suggest only 9 percent of IoT vendor budgets are spent on security. This pitiful investment is leading to shortcuts and a 'live, die, repeat' attitude to development that spells disaster for users and the long-term viability of the IoT ecosystem.

So what are these common issues that crop up time and again? Security research reveals specific problems across all aspects of IoT design, from access and connectivity to hardware, firmware, and update mechanisms.

Access all areas

In terms of access, vendors often fail to implement 'least privilege' in device permissions. Without it, an attacker can quickly gain root access to the entire system. The root login should require a password, and that password should never be set by default or hardcoded, since a single vulnerability, such as telnet being left enabled, could then hand over root access.

Missing encryption is another common failing; without it, an attacker can recover keys, certificates, hashes, and passwords and again gain control. Storing encryption keys and other sensitive information in a Trusted Platform Module (TPM) or within the System on a Chip (SoC) is the preferred option. Secure boot should also be implemented: without it, the SoC cannot check the integrity of the bootloader, and the bootloader cannot check the integrity of the firmware. This allows an attacker to modify the device's firmware, either by subverting controls on the firmware update process or through physical access to the device.
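The chain of trust described above can be sketched in a few lines. This is a minimal illustration only: the key, stage names, and data are hypothetical, and HMAC-SHA256 stands in for the asymmetric signature a real SoC would verify against a public key burned into ROM, with the root key held in the TPM or SoC fuses rather than in flash.

```python
import hashlib
import hmac

# Hypothetical root-of-trust secret; on real hardware this lives in the
# TPM or SoC fuses, never in flash alongside the firmware image.
ROOT_KEY = b"example-root-of-trust-key"

def stage_tag(image: bytes) -> bytes:
    """Compute an integrity tag (HMAC-SHA256) over one boot-stage image."""
    return hmac.new(ROOT_KEY, image, hashlib.sha256).digest()

def verify_stage(image: bytes, expected_tag: bytes) -> bool:
    """Check one link of the boot chain using a constant-time comparison."""
    return hmac.compare_digest(stage_tag(image), expected_tag)

def secure_boot(stages):
    """Walk the chain: refuse to hand control to any stage whose tag fails."""
    for name, image, tag in stages:
        if not verify_stage(image, tag):
            raise RuntimeError(f"integrity check failed at {name}; halting boot")
    return "booted"
```

Each stage only runs if the previous stage vouched for it, which is what makes physical or update-channel firmware tampering detectable.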

Just because the device uses encryption doesn't mean it is protected, however. Poor implementations, such as encryption without a MAC, hardcoded IVs, or weak key generation, can all lead to compromise, and home-grown cryptography should be avoided entirely. Encryption should also extend to firmware: attackers can deploy malicious firmware to devices, so sign firmware images and validate the signature during updates, and ensure that the HTTPS connection is secure, with SSL certificates validated.
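The encrypt-then-MAC pattern the paragraph alludes to looks roughly like this. The keystream here is a deliberately toy construction for illustration; a real device should use a vetted AEAD such as AES-GCM rather than anything home-grown. The points being demonstrated are the fresh random IV, the MAC covering IV plus ciphertext, and verification before decryption.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy SHA-256 counter keystream, for illustration only.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes):
    nonce = os.urandom(16)  # fresh random IV every time: never hardcode it
    ct = bytes(p ^ k for p, k in
               zip(plaintext, _keystream(enc_key, nonce, len(plaintext))))
    # The MAC covers the IV and the ciphertext, so neither can be swapped.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce, ct, tag

def decrypt(enc_key: bytes, mac_key: bytes, nonce: bytes, ct: bytes, tag: bytes):
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):  # verify BEFORE decrypting
        raise ValueError("MAC check failed: ciphertext rejected")
    return bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))
```

Rejecting tampered ciphertext before any decryption happens is exactly the property that "encryption without MAC" gives up.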

Wireless weaknesses

Connectivity is also a major sticking point. There's a tendency to assume that a local connection over a WiFi access point or Bluetooth Low Energy (BLE) confers some protection because of the signal's limited range, but this can still lead to drive-by attacks. Typically, wireless communication is used to pass the user's SSID and pre-shared key (PSK) to the device, often in plain text, which an attacker can capture and reuse.

Redundant functions often provide a convenient entry point for the attacker. Developers favour off-the-shelf toolkits such as BusyBox, described as the Swiss army knife of embedded Linux, but it's important to minimise the functions these expose. Similarly, open ports and redundant web user interfaces should be disabled rather than left in place. Devices that ship with serial ports enabled are particularly vulnerable, as these can expose the bootloader, a login prompt, or an unprotected shell. Such debug headers may well be needed for troubleshooting during development and programming, but they should be disabled in the end consumer product, an issue often overlooked.

Exploiting buffer overflows is another prime way for an attacker to seize control of a device once it's on the network, but compile-time hardening in the form of PIE, NX, ASLR, RELRO, stack canaries, or FORTIFY can prevent this. These mitigations are often available on embedded systems but can affect performance and battery life, so some experimentation will be required. Also check whether unsafe functions associated with buffer overflows, i.e. strcpy, sprintf, and gets, are used in binaries on the system.
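That last audit step can be partially automated. The sketch below flags unsafe libc functions among a binary's imported symbols; it is a hedged example that assumes you have `nm -D` style output available (e.g. from GNU binutils against the firmware's binaries), so the parser is shown working on sample text rather than a real binary.

```python
# libc functions commonly implicated in buffer overflows
UNSAFE = {"strcpy", "strcat", "sprintf", "gets", "scanf"}

def flag_unsafe_symbols(nm_output: str) -> set:
    """Return the unsafe libc functions referenced in `nm -D` style output."""
    found = set()
    for line in nm_output.splitlines():
        parts = line.split()
        if not parts:
            continue
        # An undefined symbol (type 'U') is a function the binary imports.
        name = parts[-1].split("@")[0]  # strip versions like strcpy@GLIBC_2.4
        if "U" in parts[:-1] and name in UNSAFE:
            found.add(name)
    return found
```

In practice you would feed it the captured stdout of `nm -D <binary>` for each binary on the device image and fail the build if the returned set is non-empty.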

Keep it current

Is the software up to date? This sounds obvious, but many devices ship with Certificate Authority (CA) bundles predating 2012, kernels dating back ten years, old versions of BusyBox, or even web server connections last accessed in 2005. Old CAs may already have been compromised, but developers still use them because it's generally easier to leave them in place and simply switch off certificate validation. Unfortunately, this exposes the device to man-in-the-middle attacks. Check that the certificate is correctly signed by a valid certificate authority, that it matches the server name, and that it hasn't expired.
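All three of those checks come for free when TLS is set up properly instead of being switched off. As a sketch, Python's `ssl.create_default_context()` enforces chain validation against trusted CAs, hostname matching, and expiry; the host name below is just a placeholder.

```python
import socket
import ssl

def open_verified_tls(host: str, port: int = 443) -> str:
    """Open a TLS connection that refuses invalid, mismatched, or expired certs."""
    # create_default_context() enables certificate verification
    # (CERT_REQUIRED) and hostname checking by default; expiry is part
    # of chain validation. Never set verify_mode to CERT_NONE on a device.
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # The handshake only succeeds if every check above passed.
            return tls.version()
```

The anti-pattern the article describes is the inverse: disabling `check_hostname` and setting `verify_mode = ssl.CERT_NONE` because an outdated CA bundle makes validation fail.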

If IoT vendors take the necessary steps to address these common security failings, these devices will no longer be so easy to hijack and subvert. A failure to do so will inevitably lead to yet more behemoth botnets, as well as the emergence of malicious firmware updates and ransomware attacks, which could potentially threaten the viability of the IoT itself.

iottechnews.com: Latest from the homepage

Google app v7.21 beta adds image donations to Lens, prepares for shopping on smart displays, making the Assistant repeat after you, and more [APK Teardown]

There’s a new beta update to the Google app making the rounds. Like so many others, this one doesn’t bring a lot of changes when it is first installed, but there are plenty of bigger things under the surface waiting to break out. While you can begin donating images to Google Lens today, the future also promises to have smart displays with shopping and YouTube suggestions, more places to set your default output devices for Assistant, and more.

Read More

Google app v7.21 beta adds image donations to Lens, prepares for shopping on smart displays, making the Assistant repeat after you, and more [APK Teardown] was written by the awesome team at Android Police.

Android Police – Android news, reviews, apps, games, phones, tablets

CNN Claims Apple Is to Be Blamed for the Repeat Apple News Notification Bug

Users of Apple News were in for an unpleasant Tuesday, as the service sent out several notifications for the same news article from CNN. Since then, several users have been quick to point the blame at CNN. Well, the company has now defended itself in a tweet, claiming that the article was only sent once from its servers, thus directly pointing the blame at Apple.
iPhone Hacks | #1 iPhone, iPad, iOS Blog

CNN Blames Apple for Apple News Bug That Caused Repeat Notifications

Earlier this afternoon, a bug with the Apple News app caused notifications for a single CNN news story to be sent out to iPhone and iPad users over and over again.

The issue, which lasted for approximately 15 minutes, appears to have impacted all Apple News subscribers who had alerts turned on for CNN based on a slew of complaints that popped up on reddit, Twitter, and the MacRumors forums.


It wasn’t clear if the problem was with CNN or the Apple News app, but on Twitter, CNN claims it was the latter. According to the news organization, CNN only sent a single notification, and the company is working with Apple to identify the problem.


Customers who were affected by the repeated notifications received somewhere around a hundred notifications, and the notifications in question were interrupting normal device operation. It appears that the issue centered around a single CNN news story, but we’ve also seen reports that some notifications from Fox News also repeated.


The only fix at the time was to turn off Apple News notifications, but Apple quickly resolved the problem, and customers who turned off their notifications because of the CNN alert bug can now safely re-enable them.



MacRumors: Mac News and Rumors – All Stories

Algorithms Are No Better at Predicting Repeat Offenders Than Inexperienced Humans

Predicting Recidivism

Recidivism is the likelihood that a person convicted of a crime will offend again. Today, this rate is often determined by predictive algorithms, and the outcome can affect everything from sentencing decisions to whether or not a person receives parole.

To determine how accurate these algorithms actually are in practice, a team led by Dartmouth College researchers Julia Dressel and Hany Farid conducted a study of a widely-used commercial risk assessment software known as Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). The software determines whether or not a person will re-offend within two years following their conviction.

The study revealed that COMPAS is no more accurate than a group of volunteers with no criminal justice experience at predicting recidivism rates. Dressel and Farid crowdsourced a list of volunteers from a website, then randomly assigned them small lists of defendants. The volunteers were told each defendant’s sex, age, and previous criminal history then asked to predict whether they would re-offend within the next two years.

The human volunteers' predictions had a mean accuracy of 62.1 percent and a median accuracy of 64.0 percent, very close to COMPAS' accuracy of 65.2 percent.

Additionally, researchers found that even though COMPAS has 137 features, linear predictors with just two features (the defendant’s age and their number of previous convictions) worked just as well for predicting recidivism rates.


The Problem of Bias

One area of concern for the team was the potential for algorithmic bias. In their study, the human volunteers exhibited racially skewed false positive rates similar to COMPAS' when predicting recidivism, even though they didn't know the defendants' race when making their predictions. The volunteers' false positive rate was 37 percent for black defendants versus 27 percent for white defendants, fairly close to COMPAS' rates of 40 percent for black defendants and 25 percent for white defendants.

In the paper’s discussion, the team pointed out that “differences in the arrest rate of black and white defendants complicate the direct comparison of false-positive and false-negative rates across race.” This is backed up by NAACP data which, for example, has found that “African Americans and whites use drugs at similar rates, but the imprisonment rate of African Americans for drug charges is almost 6 times that of whites.”

The authors noted that even though a person’s race was not explicitly stated, certain aspects of the data could potentially correlate to race, leading to disparities in the results. In fact, when the team repeated the study with new participants and did provide racial data, the results were about the same. The team concluded that “the exclusion of race does not necessarily lead to the elimination of racial disparities in human recidivism prediction.”


Repeated Results

COMPAS has been used to evaluate over 1 million people since it was developed in 1998 (though its recidivism prediction component wasn’t included until 2000). With that context in mind, the study’s findings — that a group of untrained volunteers with little to no experience in criminal justice perform on par with the algorithm — were alarming.

The obvious conclusion would be that the predictive algorithm is simply not sophisticated enough and is long overdue for an update. However, to validate their findings, the team trained a more powerful nonlinear support vector machine (NL-SVM) on the same data. When it produced very similar results, they faced criticism from those who assumed they had fitted the new algorithm too closely to the data.

Dressel and Farid said they specifically trained the algorithm on 80 percent of the data, then ran their tests on the remaining 20 percent in order to avoid so-called "over-fitting," in which a model becomes so tuned to its training data that its accuracy on new data suffers.
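The 80/20 evaluation they describe can be sketched as below, paired with a trivial two-feature rule (age and number of prior convictions, echoing the two-feature linear predictors mentioned earlier). Everything here, including the data, is synthetic and illustrative; it shows only the held-out evaluation pattern, not the study's actual model.

```python
import random

def train_test_split(rows, test_frac=0.2, seed=0):
    """Shuffle and hold out a fraction of rows for evaluation only."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - test_frac))
    return rows[:cut], rows[cut:]

def fit_threshold(train):
    """'Train' a toy two-feature rule: predict re-offence when prior
    convictions exceed the training-set mean."""
    mean_priors = sum(r["priors"] for r in train) / len(train)
    return lambda r: r["priors"] > mean_priors

def accuracy(model, rows):
    """Fraction of held-out rows the rule classifies correctly."""
    return sum(model(r) == r["reoffended"] for r in rows) / len(rows)
```

Because the threshold is chosen from the training rows alone, the accuracy measured on the held-out 20 percent reflects generalisation rather than memorisation, which is the over-fitting concern the authors were answering.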

Predictive Algorithms

The researchers concluded that perhaps the data in question is not linearly separable, which could mean that predictive algorithms, no matter how sophisticated, are simply not an effective method for predicting recidivism. Considering that defendants’ futures hang in the balance, the team at Dartmouth asserted that the use of such algorithms to make these determinations should be carefully considered.

As they stated in the study’s discussion, the results of their study show that to rely on an algorithm for that assessment is no different than putting the decision “in the hands of random people who respond to an online survey because, in the end, the results from these two approaches appear to be indistinguishable.”

“Imagine you’re a judge, and you have a commercial piece of software that says we have big data, and it says this person is high risk,” Farid told Wired, “Now imagine I tell you I asked 10 people online the same question, and this is what they said. You’d weigh those things differently.”

Predictive algorithms aren’t just used in the criminal justice system. In fact, we encounter them every day: from products advertised to us online to music recommendations on streaming services. But an ad popping up in our newsfeed is of far less consequence than the decision to convict someone of a crime.

The post Algorithms Are No Better at Predicting Repeat Offenders Than Inexperienced Humans appeared first on Futurism.

Futurism