OpenX, a leading independent advertising technology provider, announced Thursday that it will ban from its exchange a group of video ad formats, including the 300×250, one of the most prolific video ad units. These formats rank among the industry's worst offenders in creating poor ad environments for consumers and advertisers.
The move marks another industry-first initiative by the company to ensure the highest quality marketplace for brands, publishers and consumers.
300×250 video ads are a prime example of ads that provide a poor user experience. The 300×250 size does not match any standard video ad size, and the format consists almost entirely of in-banner video (IBV): video that is “stuffed” into a banner ad but sold as in-stream video inventory or as mislabeled outstream units.
In a joint study with Pixalate, the data platform that offers a comprehensive suite of products that bring transparency to programmatic advertising, OpenX confirmed that the 300×250 video unit accounts for over 30 percent of all video sold programmatically today. The study also found that this particular video ad unit had Invalid Traffic (IVT) rates nearly a third higher than the average of all other programmatic video ad units sold today.
“Video is a rapidly growing part of the programmatic ecosystem, and as the medium matures, the industry needs to constantly stay ahead of format variations to ensure brands, publishers and consumers experience the highest quality video engagement,” said John Murphy, head of marketplace quality, OpenX. “Quality has always been a priority at OpenX, and this step confirms our conclusion that this ad unit has no place in any advertising exchange that values quality. Put simply, it is an ad unit that should be stopped in its tracks.”
According to a separate OpenX performance assessment of 300×250 video ads, this ad size is 80 percent less viewable than all other video ad sizes, and its ads are 98 percent less likely to be completed while visible and audible.
“Quality is a choice. Whether it is choosing to work only with TAG certified companies, or limiting ad buys to ads.txt approved partners, or, as in the case of video, choosing to work with partners that will put the interests of the entire ecosystem above short-term gain, we must expect every player in digital advertising to make quality and value central pillars of their business,” said Jason Fairchild, co-founder of OpenX.
Three in four workplace harassment incidents go unreported. That’s why Phil Libin, the co-founder of Evernote, is backing a startup that aims to make it easier for people to report inappropriate behavior.
But since this is Silicon Valley, the startup is creating an app powered by artificial intelligence. The chatbot will prompt people about workplace incidents and record their responses, almost like a diary.
Called Spot, the app launched on Tuesday; it asks users to recount their experiences of harassment and discrimination in the workplace. Co-founders Dr. Julia Shaw, Dylan Marriott, and Dr. Daniel Nicolae think their tool can encourage people to report their experiences more quickly and accurately than they would by talking to someone in the HR department.
“If you’re reporting harassment and talking to a human being, there might be bias involved, they may ask leading questions, and you might not trust they’re not going to be sharing that information with something else,” said Dr. Shaw, who holds a PhD in psychology from the University of British Columbia and, in addition to her work at Spot, is a research associate at University College London.
She led a team of researchers to design Spot to ask open-ended, neutral questions in the “cognitive interview” style originally developed to help police ask better questions during investigations.
If an employee experiences unwanted behavior in the workplace, they can go to Spot’s website and talk to a chatbot through a text-message-style interface, which asks a series of questions about what happened. The tool uses natural language processing to ask follow-up questions about specific people and places mentioned. Spot then compiles the results in a time-stamped, encrypted PDF report that users can download and send to themselves or to their employers.
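Spot's actual system is proprietary, but the flow described above (open, neutral questions, follow-ups keyed to what the user mentions, and a timestamped record) can be sketched in a few lines of Python. Everything here is an illustrative assumption: the question wording, the keyword heuristic standing in for real natural language processing, and the function names. A real system would also render an encrypted PDF rather than a plain dictionary.

```python
from datetime import datetime, timezone

# Toy sketch of a cognitive-interview-style intake flow. The questions
# and the keyword heuristic below are illustrative assumptions, not
# Spot's actual implementation.

OPEN_QUESTIONS = [
    "Describe what happened, in as much detail as you can.",
    "Where did this take place?",
    "Was anyone else present?",
]

def follow_ups(answer: str) -> list[str]:
    """Generate neutral, non-leading follow-up prompts from an answer.

    A production system would use natural language processing; this toy
    version just looks for mentions of people or places to probe further.
    """
    prompts = []
    if "meeting" in answer.lower():
        prompts.append("You mentioned a meeting. What do you remember about it?")
    if "manager" in answer.lower():
        prompts.append("You mentioned a manager. What was said or done?")
    return prompts

def build_report(answers: list[str]) -> dict:
    """Compile answers into a timestamped record.

    A real app would encrypt this and export it as a PDF.
    """
    return {
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "entries": list(zip(OPEN_QUESTIONS, answers)),
    }

report = build_report([
    "My manager made an inappropriate comment during a meeting.",
    "In the third-floor conference room.",
    "Two colleagues.",
])
print(len(report["entries"]))  # 3
```

The design choice worth noting is that every prompt is open-ended ("What do you remember about it?") rather than suggestive, which is the core of the cognitive-interview technique the founders cite.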
People can choose to submit their complaints anonymously, which raises the question of how companies will be able to authenticate claims. Libin says it’s better to have a few unverifiable reports rather than none.
“My first reaction to this as a middle-aged white-male CEO guy, was ‘Ugh, this is going to lead to over-reporting, spam’— that’s a common knee-jerk reaction because discrimination and harassment isn’t in their daily awareness,” Libin said. But the real problem, according to the entrepreneur, is underreporting, not the relatively small percentage of false reports.
Companies in the U.S. have a legal obligation to investigate harassment and discrimination claims brought to their attention. They face less liability, however, for failing to look into anonymous claims that are unverifiable, according to Spot’s legal advisor, Paul Livingston, a practicing lawyer in the UK. But if numerous anonymous claims are made against one person, the company employing that person could be legally obligated to investigate, he said.
While Spot is free right now, the team eventually plans to charge employers for access to their reports. Spot has yet to build the backend data platform that would allow that access.
Libin’s firm, All Turtles, which provides funding and resources such as engineers and working spaces to entrepreneurs, is putting $500,000 in cash into Spot at this time. It is also dedicating resources from All Turtles’ budget, such as legal experts, which Libin values at another $500,000.
Spot wouldn’t be the first technical tool to try to help victims of mistreatment. Callisto, a nonprofit organization that built a self-reporting tool for sexual harassment at universities, launched two and a half years ago. Convercent, which makes ethics and compliance management software for businesses, introduced a texting bot for sexual harassment in October. And the STOPit app partners with schools and businesses to gather anonymous complaints.
For now, though, Spot is the only one to focus solely on workplace harassment, and it is betting on its AI as a better way to elicit complaints.
“We really believe in the ability of tech to make people better humans,” said Libin.
Given how many big names have fallen over the last few weeks due to sexual misconduct, abuse and harassment, you’d think I’d name 2017 as the year of power abuse. However, while I know a lot of folks think the issue is dying down, I don’t see that at all. There are entire industries that have yet to be hit by this, and Congress hasn’t even finished cleaning house or putting in place rules to prevent this activity. Last week I pointed out how Google was abusing its power in holding Amazon Echo Show customers hostage to force Amazon to sell products it didn’t want to sell.
YouTube has been working hard lately to fix issues around child exploitation and abuse. The Google-owned video service revamped its policies and their enforcement around videos featuring minors or family-friendly characters in disturbing situations….
Twitter this afternoon publicly posted its schedule for instituting fixes and changes to longstanding abuse and harassment issues that have plagued the social network for years. The calendar, first disclosed earlier this week in an internal Twitter email obtained by Wired, details nearly two dozen changes stretching from October 27th to January 10th. They focus on a broad range of topics, from non-consensual nudity to hateful imagery and violent rhetoric to more transparency around account suspensions.
Some measures include more proactively banning content on the platform that glorifies or condones violence, instead of simply drawing the line at actual threats of violence. The company will also suspend accounts of organizations that…
Twitter plans to do a better job of responding to users’ reports of abuse by “investing heavily” in improving its review process, according to an internal email leaked by Wired. The company also plans to toughen its rules around violence, hate speech, and abuse in a new attempt to make its platform safer for users. The leaked email doesn’t divulge final rules or full explanations (the phrase “more details to come” appears three times), but it offers the gist of what Twitter intends to do.
A lot of what’s happening here is Twitter broadening existing rules so that hate or abuse that previously slipped by might now be banned. Twitter says it will now ban tweets that “glorify violence,” instead of only banning tweets that make or promote…