Spotify, Apple Music responsible for both the music industry’s rebound and declining physical media sales


Recorded music revenue jumped by double digits last year, thanks to revenue growth from Apple, Spotify, and other streaming services, according to a new report from the Recording Industry Association of America (RIAA), the music industry’s lobbying group.

Twitter has suspended a number of accounts responsible for ‘tweetdecking’


Last month, Twitter announced a number of new rules on how users and apps can automate tweets, part of an effort to cut down on spam and on bots that spread propaganda. The company says that people operating multiple accounts can “amplify or inflate the prominence of certain tweets,” and according to BuzzFeed, it has just banned a number of accounts known for mass-retweeting or for copying and stealing tweets from other users.

BuzzFeed says that a number of accounts — such as @dory, @girlposts, and @ginah, some “with hundreds of thousands or even millions of followers” — violated the company’s new spam policies and have been suspended. A Twitter spokesperson pointed The Verge to new rules that the company rolled out as part of a broader effort…


Renault’s EZ-GO robot taxi is the most socially responsible concept in Geneva


Not every concept car has to be like the Lamborghini Egoista


Who Is Responsible When a Self-Driving Car Has an Accident?

Collision Ethics

Two separate incidents in California involving self-driving vehicles have recently gotten attention. One accident involved a Tesla Model S, the other, a Chevrolet Bolt that was using General Motors’ Cruise Automation technology. In both cases, the vehicles were reportedly using their respective autonomous driving systems.

Culver City fire service officials reported on January 22 that the Model S “plowed into the rear” of one of their fire trucks on a freeway. According to the firefighters’ tweet, the Tesla was traveling at 105 kilometers per hour (65 mph) in Autopilot mode when it hit the truck.

More than a month earlier, in December 2017, a Chevy Bolt driving autonomously collided with a motorcycle as the car was changing lanes. According to the incident report GM filed with California’s Department of Motor Vehicles, the Bolt “glanced the side” of the motorcycle. The injured motorcyclist filed a lawsuit against the American carmaker, the first involving a self-driving car. The incident was only recently made public.

Though hardly the first car crashes involving self-driving vehicles, these incidents raise a question that’s been asked many times before: Who is the responsible party? The driver, or the automakers that designed the autonomous driving technology?

Hitting the Brakes

When it comes to typical vehicular accidents, determining which party is at fault is already challenging. That challenge only grows when vehicles running on autonomous systems are introduced. To be clear, however, many of the vehicles dubbed “self-driving” have not achieved full autonomy; most still rely on input from a driver behind the wheel.

Alain L. Kornhauser, director of the Transportation Program at Princeton University and chair of the Princeton Autonomous Vehicle Engineering (PAVE) research group, thinks that in the case of these two recent crashes, the drivers share part of the blame. He told Futurism, however, that “Automated Emergency Braking [AEB] should be redesigned to work.” If a car equipped with AEB senses an impending collision and the driver does not react in time, the car starts braking on its own. According to Consumer Reports, Tesla, Subaru, and Infiniti owners are the most satisfied with their vehicles’ AEB systems.
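To make the design debate concrete, here is a minimal sketch of the kind of decision logic an AEB system implements, based on a simple time-to-collision rule. The thresholds and function names are illustrative assumptions for this sketch, not any manufacturer’s actual values.

```python
# Minimal AEB decision-logic sketch (illustrative values only, not any
# manufacturer's implementation). AEB estimates time-to-collision (TTC)
# from the gap to the obstacle and the closing speed, warns the driver
# first, then brakes autonomously if the driver has not reacted.

WARN_TTC_S = 2.5    # assumed threshold for warning the driver
BRAKE_TTC_S = 1.5   # assumed threshold for autonomous braking

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:       # not closing on the obstacle
        return float("inf")
    return gap_m / closing_speed_mps

def aeb_action(gap_m: float, closing_speed_mps: float,
               driver_braking: bool) -> str:
    ttc = time_to_collision(gap_m, closing_speed_mps)
    if driver_braking or ttc > WARN_TTC_S:
        return "no_action"
    if ttc > BRAKE_TTC_S:
        return "warn_driver"
    return "autonomous_brake"        # driver failed to react in time

# Example: 40 m gap, closing at 29 m/s (roughly 65 mph), driver not braking
print(aeb_action(40.0, 29.0, False))  # -> "autonomous_brake"
```

In Kornhauser’s terms, a crash-avoidance design would cross that last threshold early enough to stop completely, while a crash-mitigation design intervenes so late that it can only reduce the impact speed.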

Kornhauser explained that the National Highway Traffic Safety Administration’s (NHTSA) attitude toward emergency systems in self-driving cars is faulty, favoring what he calls “crash mitigation”: often, the AEB system doesn’t kick in until the driver actually touches the brakes. “The ‘NHTSA mentality’ should be to avoid crashes, not just mitigate them,” Kornhauser said. “What needs to be done is that this kind of design mentality must change.”

He added: “This is why we need to perfect AEB, or what I call ‘safe-driving cars,’ before we go all out with letting people take their hands and feet off the controls even for a little while [or] at certain times.” Most states still require a driver to be behind the wheel during autonomous vehicle testing.

Image credit: General Motors.

Kornhauser explained that for truly driverless cars to successfully hit the streets, the safe-driving aspects essentially have to be perfect. “These systems don’t have a fall guy” in the case of an accident, he said, so producers and fleet owners won’t sell or deploy fully autonomous vehicles until these designs are even further refined.

Neither Tesla nor GM is a stranger to crashes involving vehicles with self-driving systems. The state of California, which has taken a comparatively friendly stance on testing autonomous cars, has seen over 30 accidents involving self-driving vehicles since 2014.

Many believe that these crashes should in no way impede further research into self-driving cars. Rather, such crashes should continue to inform how these vehicles are designed and developed, with the ultimate goal of saving more lives in the long run.


Giving teens alcohol to teach them responsible drinking may backfire

It’s common to hear about parents giving their teens alcohol, hoping that if they learn about responsible drinking at home they’ll be less likely to binge drink when they’re on their own. But a new study suggests that this method doesn’t seem to protect teens from the risks of alcohol abuse.

Australian scientists followed 2,000 teens for six years and found that parental provision of alcohol not only failed to prevent binge drinking but was actually linked to teens finding alcohol through other sources. The study, the first to analyze the long-term effects of parents providing alcohol, was published this week in The Lancet Public Health.

Every year for six years, teens and their parents filled out different surveys about alcohol…


Study shows TV ads are responsible for your growing waistline


Surprise! Watching TV is bad for you, and not only for the reasons you might think. A recent study by Cancer Research UK revealed that people watching more than three hours of television a day were more likely to eat hundreds of additional snacks per year, the kind of snacks known for making you fat. Interestingly, it wasn’t the sedentary behavior per se that did the average TV watcher’s waistline in; it was the advertisements. Based on a YouGov survey, the organization questioned 3,348 young people between the…


Google pauses accessibility app ban while it considers ‘responsible and innovative’ uses of accessibility services

Google started contacting developers last month with a bit of an ultimatum: make sure accessibility mode is only used to help disabled users or risk being banned from the Play Store. Android’s accessibility services have been used for a great many things over the years, so this change in policy caught many developers off guard. However, Google is now notifying developers that it’s pausing the ban as it looks into how apps can make “responsible and innovative” use of accessibility features.


Quantum “Flashes” Could Be Responsible for the Creation of Gravity

Quantum Leap

Since the mid-twentieth century, two theories of physics have offered powerful yet incompatible models of the physical universe. General relativity brings space and time together into the portmanteau “space-time,” whose curvature is gravity. It works remarkably well on large scales, such as interplanetary or interstellar space.

[Infographic: The Evolution of Human Understanding of the Universe]

But zoom in to the subatomic, and things get weird. The mere act of observing an interaction changes the behavior of something that is (presumably) totally independent of observation. At those scales, we need quantum theory to help us make sense of it all.

Though scientists have made some remarkable attempts to bring these estranged theories together, notably string theory, the math behind them remains incompatible. However, new research from Antoine Tilloy of the Max Planck Institute of Quantum Optics in Garching, Germany, suggests that gravity might emerge from random fluctuations at the quantum level, which would make quantum theory the more fundamental of the two and put us on the path to a unified theory of the physical universe.

Tilloy’s Model

In quantum theory, a particle’s state is described by its wave function. This function allows theorists to predict the probability that the particle will be found in this or that place. Before a measurement is actually made, however, no one knows for sure where the particle will be, or even whether it exists. In scientific terms, the act of observation “collapses” the wave function.
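In standard notation this is the Born rule, a textbook result rather than anything specific to Tilloy’s work: the probability of finding a particle in a region is the integral of the squared magnitude of its wave function over that region.

```latex
% Born rule: probability that a particle with wave function \psi
% is found in region A, with \psi normalized over all space.
P(x \in A) = \int_A \lvert \psi(x) \rvert^2 \, dx,
\qquad
\int_{-\infty}^{\infty} \lvert \psi(x) \rvert^2 \, dx = 1 .
```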

Here’s the thing about quantum mechanics: it doesn’t define what a measurement is. Who — or what — is an observer? A conscious human? Without an answer, we’re stuck with paradoxes like Schrödinger’s cat, which invites us to consider that a boxed cat is, as far as we know, simultaneously dead and alive, and will remain so until we lift the lid.

One attempt to solve the paradox is the Ghirardi–Rimini–Weber (GRW) model from the late eighties. It incorporates random “flashes” that can cause the wave functions in quantum systems to spontaneously collapse. This purports to leave the outcome unbesmirched by meddling human observation.

Tilloy modified this model to extend quantum theory to encompass gravity: when a flash collapses a wave function and the particle reaches its final position, a gravitational field pops into existence at that precise point in space-time. On a large enough scale, quantum systems contain many particles undergoing innumerable flashes.

According to Tilloy’s theory, this creates a fluctuating gravitational field, and the field produced by the average of these fluctuations is compatible with Newton’s theory of gravity. A theory in which gravity comes from quantum processes but nevertheless behaves in a classical (Newtonian) way is called “semiclassical.”
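As a rough illustration of what “semiclassical” means here (a schematic under simplifying assumptions, not Tilloy’s actual equations): each flash pins a particle to a definite position, a Newtonian potential is sourced at that position, and averaging over many flashes recovers the familiar classical field.

```latex
% Schematic only: flashes fix particles i at positions x_i(t), each
% sourcing a Newtonian potential; the flash-averaged field approximates
% the classical Newtonian field of the mass distribution.
\Phi(r, t) = -\sum_i \frac{G m_i}{\lvert r - x_i(t) \rvert},
\qquad
\langle \Phi(r) \rangle_{\text{flashes}} \approx \Phi_{\text{Newton}}(r).
```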

However, Klaus Hornberger of the University of Duisburg-Essen in Germany cautions that other problems must be tackled before Tilloy’s semiclassical solution can warrant serious consideration as a theory unifying the fundamental forces that underlie all modern physical laws. It fits Newton’s theory of gravity, but Tilloy has yet to work out the math showing that it also describes gravity under Einstein’s theory of general relativity.

Physics may be the scientific discipline with the greatest explanatory power, but the key to unified theories is patience. As with Schrödinger’s cat, the will to know alone cannot fill in the gaps of what we simply don’t yet know.


Are Engineers Responsible for the Consequences of Their Algorithms?

Reading Faces

It’s become a custom for some protesters to cover their faces during public demonstrations. Now, it seems, technology could outwit them: a team of engineers has created an algorithm that can identify faces that are partially covered.

The algorithm identifies faces using angles measured at 14 different points on the face, according to a paper published on the preprint server arXiv and slated for presentation at the IEEE International Conference on Computer Vision Workshops in October. The researchers trained and validated the algorithm, which relies on a form of artificial intelligence called deep learning, using a dataset of 1,500 images of 25 human faces, each partially obscured by one or more of ten disguises (such as sunglasses, a scarf, or a hat) and set against one of eight complex backgrounds to simulate real-world photos. When they tested the algorithm on a new set of 500 photos, it accurately identified people wearing hats and scarves 69 percent of the time—far more accurate than comparable techniques currently in use.

Sample images of faces with disguises and varying backgrounds used by the researchers. Image Credit: Singh et al., 2017, https://arxiv.org/pdf/1708.09317.pdf
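The paper’s full pipeline is a deep network, but the geometric intuition behind keypoint-based features is easy to sketch. The snippet below is a simplified illustration of that idea, not the authors’ code: it assumes a 14-point facial keypoint detector already exists, converts the points into an angle signature, and matches a probe face against an enrolled gallery by nearest signature.

```python
import numpy as np

def angle_at(p: np.ndarray, a: np.ndarray, b: np.ndarray) -> float:
    """Angle (radians) at vertex p formed by rays p->a and p->b."""
    va, vb = a - p, b - p
    cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def angle_signature(keypoints: np.ndarray) -> np.ndarray:
    """Feature vector from 14 (x, y) facial keypoints: the angle at each
    point relative to its two neighbors. Simplified illustration, not the
    paper's actual feature construction."""
    n = len(keypoints)               # expected shape: (14, 2)
    return np.array([
        angle_at(keypoints[i], keypoints[(i - 1) % n], keypoints[(i + 1) % n])
        for i in range(n)
    ])

def best_match(probe: np.ndarray, gallery: dict) -> str:
    """Return the enrolled identity (gallery maps name -> keypoints) whose
    angle signature is closest, by Euclidean distance, to the probe's."""
    sig = angle_signature(probe)
    return min(gallery,
               key=lambda name: np.linalg.norm(sig - angle_signature(gallery[name])))
```

Because angles are invariant to translation, rotation, and scale, a signature like this degrades more gracefully than raw pixel comparisons when parts of the face are covered, which is presumably why the researchers built their features around facial keypoints rather than whole-face appearance.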

The researchers created the algorithm with the intention of unmasking criminals, one of the study authors told Motherboard. And it’s not the first technique to be created with that intention—there are already algorithms that can identify people based on their hair or clothes, or the way they walk.

But Twitter users saw a more insidious use for this algorithm: Groups in power could use it to unmask and persecute those who oppose them.

Whose job is it, then, to ensure that an algorithm like this is used only for good?

Don’t Be Evil

Algorithms already control a stunning amount of our lives—the information we see, the jobs we get, how much defendants should pay for bail. That’s unlikely to change as technology is increasingly integrated into the systems around us. And though they are supposed to help us make decisions without our fallible human subjectivity, algorithms often end up perpetuating our preexisting biases. Algorithms that function as black boxes have the potential to leave those biases unchecked, quietly altering how the world operates. “There’s the obvious potential that, if there’s a lack of transparency around an algorithm, it could perpetuate discrimination or stereotypes,” says David Ryan Polgar, a tech ethicist based in Connecticut, in an interview with Futurism.

Because engineers usually follow the instructions of companies, it’s hard to hold them responsible for the consequences of the tools they create, Polgar says. “Engineers by their very nature are trying to solve a direct problem,” he says. “If someone tells me to build a sharper knife, I am closing my blinders and saying, ‘OK, I’ll build a sharper knife.’ My objective is not to think of all the possible ways the knife could be misused.” That responsibility falls on the individuals in the company who decided the tool was worth having in the first place, he adds.

No technological advance is free of risk. In other industries, such as medicine, the government vets and tracks the tools that are most susceptible to being abused. But the government has not been nimble enough to make rules about how algorithms should be used, Polgar points out.

As such, it’s been up to companies to make sure the riskiest things don’t see the light of day, Polgar says. That can be dangerous because companies, governed by a small group of leaders and the whims of the market, don’t always make decisions that the general population would agree with. Companies that have gotten it wrong have dealt with substantial blowback — Facebook took heat for automatically censoring an iconic photo from the Vietnam war, and Google was forced to act quickly after its software identified two black people in a photo as “gorillas.”

It’s impossible to know if these situations could have been avoided if the algorithms behind the gaffes were subject to input from a more diverse group, or if they were more transparent to the general public. But it probably wouldn’t have hurt.

From Reaction To Prevention

As the design of these algorithms gets more attention, companies may begin to put more emphasis on ethical considerations before the public pushes back—a preventative approach to PR fiascos instead of a reactive one. Experts have called for tech companies to employ ethicists, or equal numbers of developers and liberal arts majors; others have emphasized that computer science majors should receive training in ethics. In 2015, Elon Musk and Sam Altman unveiled the nonprofit OpenAI, dedicated to transparency and safety for artificial intelligence.

That attention to ethics could prove essential as companies like Amazon, Google, Microsoft, Apple, and Facebook become increasingly powerful, quashing or absorbing competitors. And if customers don’t like what the companies are doing, it’s increasingly difficult to opt out. “Traditionally we would say speak with your wallet, but I don’t think it works the same way now,” Polgar says.

“Creators will have blinders on to solve problems. But also responsible companies, tied in with corporate social responsibility, should put in reasonable stopgaps to ensure the likelihood that there is not a dramatic amount of misuse of the product,” he adds.

As for the facial recognition algorithm, Polgar says it touched a nerve because it cropped up at a moment of “necessary and pivotal protest” in the US. Regardless, the creators would have done well to include some caveats on how to prevent misuse or abuse.

The algorithm still has a number of limitations. It can’t identify people wearing hard masks, like the Guy Fawkes mask often worn by members of the hacking collective Anonymous. And it’s still not accurate enough to warrant widespread use, as Inverse notes. To improve it, the researchers plan to test their algorithm on real-world scenarios.

As the algorithm advances, its engineers aren’t sure how to prevent their creation from being used for nefarious purposes in the future. But they are sure of their own intentions.

“I actually don’t have a good answer for how that can be stopped,” study co-author Amarjot Singh told Motherboard. “It has to be regulated somehow … it should only be used for people who want to use it for good stuff.”
