Facebook vows to crack down on platform abuse; CEO Mark Zuckerberg finally opens up about Cambridge Analytica data misuse


Facebook CEO Mark Zuckerberg has finally opened up about the ongoing Cambridge Analytica scandal, in which the firm is said to have obtained Facebook user data improperly, a breach of Facebook's trust. The company has vowed to crack down on platform abuse and has announced its next steps and plan of action: it will act on potential past abuse and put stronger protections in place to prevent future abuse.

Facebook says it has already reviewed and investigated all the apps that had access to large amounts of information before it changed the platform in 2014 to reduce data access. Going forward, it will conduct a full audit of any app with suspicious activity, ban any app found to be misusing information, and notify users whose data has been misused. It will also turn off an app's access to a user's information if the user hasn't used the app in the last three months. Finally, Facebook will change its login system: in the next version, it will reduce the data an app can request without app review …
Fone Arena

Here’s Why That Recent Abuse of Facebook Data Matters


Your data is probably being used without your consent.

This is, I hope, not a shock to you. In fact, it’s one of the biggest takeaways of the recent investigation involving Facebook and a data mining company called Cambridge Analytica.

The short version: Cambridge Analytica used a quiz app to scrape data such as users’ identities, their friend networks, and likes from millions of Facebook users. Users inadvertently gave consent by agreeing to the user conditions in the app. The company later used that data to build targeted political ads for Donald Trump’s political campaign, according to the New York Times, which conducted the investigation along with The Observer.

But who’s to blame for such a massive breach of user privacy? Yes, it’s easy to point a well-deserved finger at Cambridge Analytica, and another at Facebook. But it’s too neat to pin it on those two when the problem is so much larger and more insidious.

Why is Cambridge Analytica allowed to scrape this data?

These seem like simple questions, but they’re really not.

To the first point: Well, the Facebook users that downloaded the data-scraping app, thisisyourdigitallife, did technically consent to having their data scraped.

We all do this. When we download a new app or sign up for a social media site, we never read the user agreements. They’re boring and we’re impatient. But the truth is they have some pretty important information in them. We check “yes” on Terms of Service agreements, even though we know in the back of our minds the fine print might include a clause stating that the network plans to sell our information. We could all stand to be more discerning before downloading apps and filling out quizzes.

Even more invasive, Facebook’s terms of service allowed apps to access friends’ Facebook data as well as the user’s own (this was the case in 2014 and, Facebook has stated, has since changed). That means that any app using Facebook at the time could have accessed as much data as Cambridge Analytica did, though it’s not yet clear if other apps did so.

Facebook executives have claimed “everyone involved gave their consent,” as said by vice president and deputy general counsel Paul Grewal. But that’s patently false. Consent, by nature, has to be informed for people to actually give it.

What in the world is Cambridge Analytica doing with this data?

That is, how worried should you be?

Cambridge Analytica says it used this information to create profiling tools, which were then used to target political ads to users based on their personality traits. In doing so, many news reports have suggested, the company helped put Donald Trump into office.

It’s still unclear if that sort of targeting has much influence. Research suggests that it does not. As The Verge put it, while misuse of data is a no-no, suggesting that Facebook likes are enough leverage to influence an election is “almost certainly overstating Cambridge Analytica’s power.”

Even so, the sort of data that the app was collecting is used by lots of other third parties. It’s incredibly valuable to advertisers, who can exploit users’ information to target their marketing down to the individual level. All of this high-tech data collection is in the name of a timeless goal: to get consumers to buy a product.

Can’t the government stop this?

Discomfited by the seemingly unlimited power social media has over our information, lawmakers globally have pushed for governments to step in. In a recent review, the New York Times found that, in the past five years, more than 50 countries have passed laws that better regulate how people use, and are protected from, websites.

Most notably, in May, the European Union (EU) will put new regulations into effect that will ensure users understand when their data is being collected. The General Data Protection Regulation, or GDPR, requires that companies identify what data they are collecting and why it’s being collected, and that they allow consumers to access and control that data. The legislation applies to social media networks as well.

The United States remains one of the few of the world’s leading countries that has no such legislation in place. Last year, Congress overturned a law that would have prevented internet service providers (ISPs) from selling data without users’ consent. Legislators have resisted taking any action against Silicon Valley; broadly, opponents of such regulation assert that these regulations stifle innovation.

But that doesn’t mean that Americans are going to be the only people in the world with their data up for grabs. As the EU’s new laws go into effect, Facebook is launching a new “privacy dashboard” that will help users worldwide exert better control over their privacy settings. Other companies are altering, or even shutting down, their social media advertising and data businesses internationally in response to the GDPR, Wired reports, simply because it’s too difficult to tailor services to the countries with more restrictive laws.

Cambridge Analytica may be the most recent company found to be taking advantage of the data users unwittingly hand over to social media sites, but it likely won’t be the last. There’s no telling how many apps have been doing the same thing. Their roles may come out in the future, or they could remain a secret.

GDPR shows the power that legislation can have in reining in the moneymaking schemes of social media. If similar laws were passed outside the EU, you wouldn’t have to worry as much about clicking “I agree” to any terms of service you damn well please.

The post Here’s Why That Recent Abuse of Facebook Data Matters appeared first on Futurism.



Cambridge Analytica’s Facebook data abuse shouldn’t get credit for Trump


‘I think Cambridge Analytica is a better marketing company than a targeting company’


The Verge – All Posts


OpenX Cracks Down on Ad Unit Abuse


OpenX, a leading independent advertising technology provider, announced Thursday that it will ban from its exchange a group of video ad formats, including the 300×250, one of the most prolific video ad units, that rank among the industry’s worst offenders in creating bad ad environments for consumers and advertisers.

The move marks another industry-first initiative by the company to ensure the highest quality marketplace for brands, publishers and consumers.

300×250 video ads are a prime example of ads that provide a poor user experience. The 300×250 size does not match any standard video ad size and consists almost entirely of in-banner video (IBV), a video ad that is “stuffed” into a banner ad but sold as in-stream video inventory or mislabeled as an outstream unit.

In a joint study with Pixalate, the data platform that offers a comprehensive suite of products that bring transparency to programmatic advertising, OpenX confirmed that the 300×250 video unit accounts for over 30 percent of all video sold programmatically today. The study also found that this particular video ad unit had Invalid Traffic (IVT) rates nearly a third higher than the average of all other programmatic video ad units sold today.

“Video is a rapidly growing part of the programmatic ecosystem, and as the medium matures, the industry needs to constantly stay ahead of format variations to ensure brands, publishers and consumers experience the highest quality video engagement,” said John Murphy, head of marketplace quality, OpenX. “Quality has always been a priority at OpenX, and this step confirms our conclusion that this ad unit has no place in any advertising exchange that values quality. Put simply, it is an ad unit that should be stopped in its tracks.”

According to a separate OpenX performance assessment of 300×250 video ads, the company also found that this particular ad size is 80 percent less viewable than all other video ad sizes, and its ads are 98 percent less likely to be completed while visible and audible.

“Quality is a choice. Whether it is choosing to work only with TAG certified companies, or limiting ad buys to ads.txt approved partners, or, as in the case of video, choosing to work with partners that will put the interests of the entire ecosystem above short-term gain, we must expect every player in digital advertising to make quality and value central pillars of their business,” said Jason Fairchild, co-founder of OpenX.

The post OpenX Cracks Down on Ad Unit Abuse appeared first on Mobile Marketing Watch.

Mobile Marketing Watch


Researchers discover new ways to abuse Meltdown and Spectre flaws

Intel has already started looking for other Spectre-like flaws, but it won't be able to move on from the Spectre/Meltdown CPU vulnerabilities anytime soon. A team of security researchers from NVIDIA and Princeton University have discovered new ways t…
Engadget RSS Feed

Phil Libin, the co-founder of Evernote, is backing an AI chat bot to help people report workplace abuse

Three in four workplace harassment incidents go unreported. That’s why Phil Libin, the co-founder of Evernote, is backing a startup that aims to make it easier for people to report inappropriate behavior.

But since this is Silicon Valley, the startup is creating an app powered by artificial intelligence. The chat bot will prompt people about workplace incidents and record their responses, almost like a diary.

Called Spot, the app launched on Tuesday and it asks users to recount their experiences of harassment and discrimination in the workplace. Co-founders Dr. Julia Shaw, Dylan Marriott, and Dr. Daniel Nicolae think their tool can encourage people to report their experiences more quickly and accurately than they would talking to someone in the HR department.

“If you’re reporting harassment and talking to a human being, there might be bias involved, they may ask leading questions, and you might not trust they’re not going to be sharing that information with someone else,” said Dr. Shaw, who holds a PhD in psychology from the University of British Columbia and, in addition to her work at Spot, is a research associate at University College London.

She led a team of researchers to design Spot to ask open-ended, neutral questions in the “cognitive interview” style originally developed to help police ask better questions during investigations.

If an employee experiences unwanted behavior in the workplace they can go to Spot’s website and talk to a chatbot with a text-message style interface, which will ask a series of questions about what happened. The tool uses natural language processing to ask follow up questions about specific people and places mentioned. Spot then compiles the results in a time-stamped, encrypted PDF report that users can download and send to themselves or to their employers.
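Spot’s actual implementation isn’t public, but the flow described above (open-ended prompts, NLP-driven follow-ups, and a time-stamped report) can be sketched in a few lines. Everything below is a hypothetical illustration: the question list, the entity heuristic standing in for real named-entity recognition, and the plain-dict report (which Spot reportedly delivers as an encrypted PDF) are all assumptions, not Spot’s code.

```python
from datetime import datetime, timezone

# Open-ended, neutral prompts in the "cognitive interview" style
# described above. Illustrative only.
OPEN_QUESTIONS = [
    "In your own words, what happened?",
    "Where and when did this take place?",
    "Who else was present or involved?",
]

def follow_ups(answer: str) -> list:
    """Generate neutral follow-up prompts for people and places mentioned.

    A real system would use natural language processing (e.g. named-entity
    recognition); capitalized words serve as a crude stand-in here.
    """
    entities = {w.strip(".,") for w in answer.split()
                if w[:1].isupper() and len(w) > 1}
    return ["Can you tell me more about %s?" % e for e in sorted(entities)]

def compile_report(qa_pairs: list) -> dict:
    """Assemble a time-stamped record of the interview.

    Spot reportedly produces an encrypted PDF; this sketch returns a plain
    dict that could be serialized and encrypted downstream.
    """
    return {
        "created_at": datetime.now(timezone.utc).isoformat(),
        "entries": [{"question": q, "answer": a} for q, a in qa_pairs],
    }

# Scripted (non-interactive) run of the flow.
answers = [
    "My manager made inappropriate comments in the Denver office.",
    "It happened Tuesday afternoon.",
    "A coworker saw it.",
]
report = compile_report(list(zip(OPEN_QUESTIONS, answers)))
```

The key design point the article attributes to Spot is that the prompts stay neutral and open-ended rather than leading, with follow-ups keyed to whatever the reporter actually mentioned.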

People can choose to submit their complaints anonymously, which raises the question of how companies will be able to authenticate claims. Libin says it’s better to have a few unverifiable reports rather than none.

“My first reaction to this as a middle-aged white-male CEO guy, was ‘Ugh, this is going to lead to over-reporting, spam’— that’s a common knee-jerk reaction because discrimination and harassment isn’t in their daily awareness,” Libin said. But the real problem, according to the entrepreneur, is underreporting, not the relatively small percentage of false reports.

Companies in the U.S. have a legal obligation to investigate harassment and discrimination claims brought to their attention. Companies have less liability, however, to look into anonymous claims that are unverifiable, according to Spot’s legal advisor, Paul Livingston, who is a practicing lawyer in the UK. But if numerous anonymous claims are launched against one person, the company employing that person could be held legally liable to investigate, he said.

While Spot is free right now, eventually the team plans to charge employers for access to their reports. Spot has yet to build the backend data platform that would allow that access.

Libin’s firm, All Turtles, which provides funding and resources such as engineers and working spaces to entrepreneurs, is putting $500,000 in cash into Spot at this time. It’s also dedicating resources from All Turtles’ budget, such as legal experts, which Libin values at $500,000.

Spot wouldn’t be the first technical tool to try to help victims of mistreatment. Callisto, a nonprofit organization that built a self-reporting tool for sexual harassment at universities, launched two and a half years ago. Convercent, which makes ethics and compliance management software for businesses, introduced a texting bot for sexual harassment in October. And the STOPit app partners with schools and businesses to gather anonymous complaints.

Spot is the only one to focus solely on workplace harassment, for now, and it is betting on its AI as a better way to elicit complaints.

“We really believe in the ability of tech to make people better humans,” said Libin.

Recode – All

What Amazon’s Abuse of Power Foreshadows for 2018

Given how many big names have fallen over the last few weeks due to sexual misconduct, abuse and harassment, you’d think I’d name 2017 as the year of power abuse. However, while I know a lot of folks think the issue is dying down, I don’t see that at all. There are entire industries that have yet to be hit by this, and Congress hasn’t even finished cleaning house or putting in place rules to prevent this activity. Last week I pointed out how Google was abusing its power in holding Amazon Echo Show customers hostage to force Amazon to sell products it didn’t want to sell.

YouTube pulls autocomplete results that showed child abuse terms

YouTube has been working hard lately to fix issues around child exploitation and abuse. The Google-owned video service revamped its policies and their enforcement around videos featuring minors or family-friendly characters in disturbing situations….

Twitter releases its calendar of upcoming measures to combat harassment and abuse

Twitter this afternoon publicly posted its schedule for instituting fixes and changes to longstanding abuse and harassment issues that have plagued the social network for years. The calendar, first disclosed earlier this week in an internal Twitter email obtained by Wired, details nearly two dozen changes stretching from October 27th to January 10th. They focus on a broad range of topics, from non-consensual nudity to hateful imagery and violent rhetoric to more transparency around account suspensions.

Some measures include more proactively banning content on the platform that glorifies or condones violence, instead of simply drawing the line at actual threats of violence. The company will also suspend accounts of organizations that…

