Amobee Unveils Custom Bid Algorithms for Marketers

MMW learned ahead of the weekend that Amobee — a leading global digital marketing technology company serving brands and agencies — has rolled out enhancements to its platform that allow buyers to deploy custom data sets into Amobee’s bid modeling system. The feature launch empowers advertisers and their agencies to tailor their bid strategies against the data signals that matter most to them.

As an initial proof of concept, Amobee partnered with TruSignal, Inc., a leader in predictive score marketing, to ingest its custom-built, predictive, people-based scores into the existing Amobee bidder and influence the real-time bids from the platform. Advertisers can leverage their own data models, and/or take advantage of TruSignal’s custom-built predictive scores, as they are fully integrated into the platform.

Amobee’s platform integration works by applying data from any provider as a variable that triggers real-time changes in an advertiser’s bid. By leveraging the platform’s ability to integrate outside data, marketers can better fuel omnichannel engagement through cross-channel, programmatic media campaigns. With this unified offering, leading brands and agencies can plan and buy media for specific audiences in a more integrated way to maximize their investments across desktop, mobile, video and social.
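
To make the mechanism concrete, here is a minimal, hypothetical sketch of how an external audience score could be applied as a real-time bid multiplier. The function name, the linear scaling rule, and the multiplier bounds are illustrative assumptions, not Amobee's or TruSignal's actual interfaces.

```python
# Hypothetical sketch: scale a base CPM bid by an external, people-based
# audience score. The scaling rule and bounds are illustrative assumptions,
# not Amobee's or TruSignal's actual API.

def adjust_bid(base_bid_cpm: float, audience_score: float,
               min_multiplier: float = 0.5, max_multiplier: float = 2.0) -> float:
    """Scale a base CPM bid by a predictive audience score in [0, 1]."""
    # Map the score linearly onto the allowed multiplier range.
    multiplier = min_multiplier + audience_score * (max_multiplier - min_multiplier)
    return round(base_bid_cpm * multiplier, 2)

# Example: a $2.00 CPM base bid for a user whose custom score is 0.8.
print(adjust_bid(2.00, 0.8))  # -> 3.4
```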

“At Amobee, we are constantly seeking ways to maximize the results of client campaigns through enhanced decision making capabilities within the platform,” said Maxwell Knight, Amobee’s Vice President of Analytics Services. “By leveraging the power of outside data and custom audience segments, we provide brands and agencies a highly customized solution that multiplies their ability to reach the right audience at the right place and right time, across every digital channel, on any device.”

The platform simplifies the delivery of advertising across all channels and screens, including video, display, mobile, and social. It includes the Amobee DSP, Amobee DMP, Brand Intelligence, and DataMine analytics, which converts raw data into custom audience and campaign insights, empowering marketers to make more informed decisions.


Mobile Marketing Watch

Petasense bets on hardware over algorithms for industrial IoT

Petasense co-founders Abhinav Khushraj (left) and Arun Santhebennur (right). Image courtesy of Petasense.

Since this week’s theme so far is data, let’s keep it going with a profile on Petasense, a startup that offers predictive analytics to industrial clients. Petasense was formed in 2014 with a plan to stop downtime at factories by improving plant owners’ ability to understand when their machines would fail. It built a Wi-Fi-connected vibration sensor that collects data from each machine and sends it up to the cloud for analysis.

The resulting data gets sent back to plant operators in the form of a health score. What the Petasense founders discovered was that avoiding downtime isn’t why companies were interested in the service. Instead, they wanted to use it to avoid scheduled maintenance on equipment that didn’t actually need it. Plant operators can now set a customized maintenance schedule for each machine, avoiding the downtime and cost that come with servicing a machine before it needs it.
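
To illustrate the general idea (this is not Petasense's actual scoring method), a health score could be as simple as comparing the RMS amplitude of a window of vibration samples against a per-machine healthy baseline and an alarm threshold:

```python
# Illustrative sketch only (not Petasense's actual method): turn a window of
# raw vibration samples into a simple 0-100 health score by comparing RMS
# amplitude against a per-machine baseline and an alarm level.
import math

def rms(samples: list[float]) -> float:
    """Root-mean-square amplitude of a vibration sample window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def health_score(samples: list[float], baseline_rms: float,
                 alarm_rms: float) -> float:
    """Map current RMS between a healthy baseline and an alarm level to 0-100."""
    current = rms(samples)
    if current <= baseline_rms:
        return 100.0
    if current >= alarm_rms:
        return 0.0
    return 100.0 * (alarm_rms - current) / (alarm_rms - baseline_rms)

# Example: one window of readings against a baseline of 0.2 and an alarm at 1.0.
window = [0.21, 0.35, -0.30, 0.28, -0.33]
print(round(health_score(window, baseline_rms=0.2, alarm_rms=1.0), 1))
```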

What Petasense is doing isn’t new. GE has been touting its ability to ingest data and predict failures for the last five or six years. Startups such as Augury offer similar services, albeit by analyzing the sounds machines make rather than their vibrations directly. The general sense is that anyone with a fancy algorithm and access to data can come up with some way to predict the health of a given machine.

But Abhinav Khushraj, one of Petasense’s co-founders, begs to differ. Fancy algorithms are one thing, he says, but access to data is the essential thing. Petasense built its own vibration sensor so it could feed clean data into its analytics. Controlling the sensor is what gives Petasense its competitive edge, says Khushraj.

I want to believe this. I can see the value in having clean data and the ability to understand the specifics of the hardware collecting that data. However, I also know that new ways of getting data come along all the time with different incentives to use them. Petasense does make it incredibly easy to buy and deploy its vibration sensor, which goes a long way to assuaging my doubts about its customers finding a new source of vibration data.

The sensor costs between $400 and $600 and is glued onto the equipment with industrial epoxy. The battery lasts two years, and the sensor transmits data every three hours. If deployment is as simple as having someone walk around sticking a sensor onto every piece of equipment, that’s not a difficult ask. It does assume the device is easy to put on a corporate network, though; because it uses Wi-Fi, that could get tricky.

Once the sensor is transmitting data, companies pay about $10 per month, per device, for the analytics. The service replaces two roles: the technician who would come around and collect vibration data from the gear every month or so, and the specialist that technician sent the readings to, who would then check them for signs of a problem.

Obviously the sensor replaces those two people, but it also collects a lot more information than was previously possible, which presumably leads to better results. Petasense has customers in the utilities industry and customers who use it to monitor HVAC equipment in buildings.

Stacey on IoT | Internet of Things news and analysis

Algorithms Are No Better at Predicting Repeat Offenders Than Inexperienced Humans

Predicting Recidivism

Recidivism is the likelihood that a person convicted of a crime will offend again. Today, this risk is often estimated by predictive algorithms, and the outcome can affect everything from sentencing decisions to whether or not a person receives parole.

To determine how accurate these algorithms actually are in practice, a team led by Dartmouth College researchers Julia Dressel and Hany Farid studied a widely used commercial risk assessment tool known as Correctional Offender Management Profiling for Alternative Sanctions (COMPAS). The software predicts whether or not a person will re-offend within two years of their conviction.

The study revealed that COMPAS is no more accurate at predicting recidivism than a group of volunteers with no criminal justice experience. Dressel and Farid recruited volunteers online, then randomly assigned each of them a short list of defendants. The volunteers were told each defendant’s sex, age, and previous criminal history, then asked to predict whether that defendant would re-offend within the next two years.

The human volunteers’ predictions had a mean accuracy of 62.1 percent and a median of 64.0 percent, very close to COMPAS’ accuracy of 65.2 percent.

Additionally, the researchers found that even though COMPAS uses 137 features, a linear predictor with just two of them (the defendant’s age and number of previous convictions) worked just as well for predicting recidivism.
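
For a sense of what such a two-feature linear predictor looks like, here is a minimal sketch using logistic regression on age and number of prior convictions. The handful of rows below are invented for illustration; they are not the COMPAS data used in the study.

```python
# Minimal sketch of a two-feature linear predictor of the kind described
# above. The tiny dataset is made up for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [age, prior_convictions]; label 1 = re-offended within two years.
X = [[19, 4], [23, 2], [35, 0], [41, 1], [22, 6], [55, 0], [30, 3], [45, 0]]
y = [1, 1, 0, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)

# Predicted risk for a hypothetical 27-year-old with 2 prior convictions.
print(model.predict_proba([[27, 2]])[0][1])
```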

The Problem of Bias

One area of concern for the team was the potential for algorithmic bias. In the study, the human volunteers showed a racial disparity in false positive rates similar to COMPAS’, even though they were never told the defendants’ race. The volunteers’ false positive rate was 37 percent for black defendants versus 27 percent for white defendants, close to COMPAS’ rates of 40 percent and 25 percent, respectively.
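
For clarity, the false positive rate here is the share of defendants who did not re-offend but were nonetheless predicted to be high risk, computed separately for each group. A small sketch with toy data:

```python
# Sketch of the metric being compared: false positive rate = false positives
# divided by all actual negatives. In the study's comparison this would be
# computed once per racial group. The data below is a toy illustration.

def false_positive_rate(predicted_high_risk: list[bool],
                        reoffended: list[bool]) -> float:
    """FPR = people wrongly flagged high risk / all people who did not re-offend."""
    flags_for_non_reoffenders = [pred for pred, actual
                                 in zip(predicted_high_risk, reoffended)
                                 if not actual]
    if not flags_for_non_reoffenders:
        return 0.0
    return sum(flags_for_non_reoffenders) / len(flags_for_non_reoffenders)

# Toy example: 4 defendants did not re-offend, 2 of them were flagged high risk.
pred = [True, False, True, False, True]
actual = [False, False, False, False, True]
print(false_positive_rate(pred, actual))  # -> 0.5
```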

In the paper’s discussion, the team pointed out that “differences in the arrest rate of black and white defendants complicate the direct comparison of false-positive and false-negative rates across race.” This is backed up by NAACP data, which, for example, shows that “African Americans and whites use drugs at similar rates, but the imprisonment rate of African Americans for drug charges is almost 6 times that of whites.”

The authors noted that even though a person’s race was not explicitly stated, certain aspects of the data could potentially correlate to race, leading to disparities in the results. In fact, when the team repeated the study with new participants and did provide racial data, the results were about the same. The team concluded that “the exclusion of race does not necessarily lead to the elimination of racial disparities in human recidivism prediction.”

Repeated Results

COMPAS has been used to evaluate over 1 million people since it was developed in 1998 (though its recidivism prediction component wasn’t included until 2000). With that context in mind, the study’s findings — that a group of untrained volunteers with little to no experience in criminal justice perform on par with the algorithm — were alarming.

The obvious conclusion would be that the predictive algorithm simply isn’t sophisticated enough and is long overdue for an update. However, to validate their findings, the team trained a more powerful nonlinear support vector machine (NL-SVM) on the same data. When it produced very similar results, the team faced backlash from those who assumed they had trained the new algorithm too closely to the data.

Dressel and Farid said they specifically trained the algorithm on 80 percent of the data, then ran their tests on the remaining 20 percent in order to avoid so-called “overfitting,” where a model becomes so tuned to its training data that its accuracy on new data suffers.
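
Here is a hedged sketch of that validation setup, with synthetic data standing in for the real defendant records: fit an RBF-kernel support vector machine on 80 percent of the rows and score it only on the held-out 20 percent.

```python
# Hedged sketch of an 80/20 train/test validation with a nonlinear SVM.
# The synthetic data below stands in for the real defendant records and is
# not the dataset used in the study.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 1000
age = rng.integers(18, 70, n)
priors = rng.poisson(2, n)
# Synthetic "re-offended" labels loosely tied to youth and prior record.
p = 1 / (1 + np.exp(-(-0.08 * (age - 35) + 0.5 * (priors - 2))))
y = rng.random(n) < p
X = np.column_stack([age, priors])

# 80/20 split: the model never sees the test rows during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```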

Predictive Algorithms

The researchers concluded that perhaps the data in question is not linearly separable, which could mean that predictive algorithms, no matter how sophisticated, are simply not an effective method for predicting recidivism. Considering that defendants’ futures hang in the balance, the team at Dartmouth asserted that the use of such algorithms to make these determinations should be carefully considered.

As they stated in the study’s discussion, the results show that relying on an algorithm for that assessment is no different from putting the decision “in the hands of random people who respond to an online survey because, in the end, the results from these two approaches appear to be indistinguishable.”

“Imagine you’re a judge, and you have a commercial piece of software that says we have big data, and it says this person is high risk,” Farid told Wired. “Now imagine I tell you I asked 10 people online the same question, and this is what they said. You’d weigh those things differently.”

Predictive algorithms aren’t just used in the criminal justice system. In fact, we encounter them every day: from products advertised to us online to music recommendations on streaming services. But an ad popping up in our newsfeed is of far less consequence than the decision to convict someone of a crime.

Futurism

Logan Paul forced YouTube to admit humans are better than algorithms

YouTube is no stranger to controversy. Many of its top stars have been in hot water recently: from PewDiePie making racist remarks to a "family" channel with abusive kid pranks, the company's been under fire for not keeping a closer eye on the…
Engadget RSS Feed

Algorithms transform Chicago scenes into trippy lobby art

Office lobbies are prime spots for corporations to make statements about their values and taste, yet "lobby art" is usually a shorthand way of saying "insipid crap." However, an art installation studio called ESI Designs has given a Chicago office bu…
Engadget RSS Feed

Facebook’s chief security officer let loose at critics on Twitter over the company’s algorithms

Stamos is a key player in Facebook’s effort to understand Russian election meddling.

Facebook executives don’t usually say much publicly, and when they do, it’s usually measured and approved by the company’s public relations team.

Today was a little different. Facebook’s chief security officer, Alex Stamos, took to Twitter to deliver an unusually raw tweetstorm defending the company’s software algorithms against critics who believe Facebook needs more oversight.

Facebook uses algorithms to determine everything from what you see and don’t see in News Feed, to finding and removing other content like hate speech and violent threats. The company has been criticized in the past for using these algorithms — and not humans — to monitor its service for things like abuse, violent threats, and misinformation.

The algorithms can be fooled or gamed, and part of the criticism is that Facebook and other tech companies don’t always seem to appreciate that algorithms have biases, too.

Stamos says it’s hard to understand from the outside.

“Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks,” Stamos tweeted. “My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.”

Stamos’s thread is all the more interesting given his current role inside the company. As chief security officer, he’s spearheading the company’s investigation into how Kremlin-tied Facebook accounts may have used the service to spread misinformation during last year’s U.S. presidential campaign.

The irony in Stamos’s suggestion, of course, is that most Silicon Valley tech companies are notorious for controlling their own message. This means individual employees rarely speak to the press, and when they do, it’s usually to deliver a bunch of prepared statements. Companies sometimes fire employees who speak to journalists without permission, and Facebook executives are particularly tight-lipped.

This makes Stamos’s thread, and his candor, very intriguing. Here it is in its entirety.

  1. I appreciate Quinta’s work (especially on Rational Security) but this thread demonstrates a real gap between academics/journalists and SV.
  2. I am seeing a ton of coverage of our recent issues driven by stereotypes of our employees and attacks against fantasy, strawman tech cos.
  3. Nobody of substance at the big companies thinks of algorithms as neutral. Nobody is not aware of the risks.
  4. In fact, an understanding of the risks of machine learning (ML) drives small-c conservatism in solving some issues.
  5. For example, lots of journalists have celebrated academics who have made wild claims of how easy it is to spot fake news and propaganda.
  6. Without considering the downside of training ML systems to classify something as fake based upon ideologically biased training data.
  7. A bunch of the public research really comes down to the feedback loop of “we believe this viewpoint is being pushed by bots” -> ML
  8. So if you don’t worry about becoming the Ministry of Truth with ML systems trained on your personal biases, then it’s easy!
  9. Likewise all the stories about “The Algorithm”. In any situation where millions/billions/tens of Bs of items need to be sorted, need algos
  10. My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.
  11. And to be careful of their own biases when making leaps of judgment between facts.
  12. If your piece ties together bad guys abusing platforms, algorithms and the Manifestbro into one grand theory of SV, then you might be biased
  13. If your piece assumes that a problem hasn’t been addressed because everybody at these companies is a nerd, you are incorrect.
  14. If you call for less speech by the people you dislike but also complain when the people you like are censored, be careful. Really common.
  15. If you call for some type of speech to be controlled, then think long and hard of how those rules/systems can be abused both here and abroad
  16. Likewise if your call for data to be protected from governments is based upon who the person being protected is.
  17. A lot of people aren’t thinking hard about the world they are asking SV to build. When the gods wish to punish us they answer our prayers.
  18. Anyway, just a Saturday morning thought on how we can better discuss this. Off to Home Depot. FIN

Recode – All