Mark Zuckerberg: We didn’t do enough to keep users safe

In light of the news that Facebook has rewritten its data policy, and that Cambridge Analytica may have had up to 87 million users' data, founder and CEO Mark Zuckerberg hosted a call with the media to discuss the company's efforts to better protect…
Engadget RSS Feed

VPNSecure Delivers a Lifetime of Safe Online Browsing for Under $35 [Sponsored Deal]

Note: The following post was written by our sponsor, StackSocial.

There’s no question that in 2018 online private data can be easily intercepted by hackers. Case in point: the hacking group that recently hit Atlanta with a massive ransomware attack is now threatening to wipe the city’s government data unless it is paid a hefty sum.

Combine this with the fact that net neutrality is currently being fought over in court, and it’s clear that users should do their part to make sure they’re browsing the web as safely as they possibly can.

VPNSecure Delivers a Lifetime of Safe Online Browsing for Under $35 [Sponsored Deal] was written by the awesome team at Android Police.

Android Police – Android news, reviews, apps, games, phones, tablets

Disney researches safe human-robot interactions

[Image: human-robot interactions, Disney Research]

Disney has published new research in human-robot interaction. Its focus is to explore how to make robots move in a way that encourages human beings to cooperate with them, and develop better human-machine rapport.

Part of human nature is to make snap judgements based on aesthetics. It’s a trait with obvious evolutionary benefits: determining threats before they get too close aids any species’ survival. This is why robot posture and movement will be increasingly significant for their future development and acceptance in the workplace.

In short, nobody wants to work alongside, or interact with, machines that look intimidating or dangerous. This is why many humanoid robots, such as Aldebaran/SoftBank’s NAO and Pepper machines, are designed to be small, non-threatening, and almost childlike. The manufacturers’ aim in those cases is to make human beings protective of robots, particularly ones that appear to express emotions.

The same principle applies to all types of robots, including ‘cobot’ machines that are designed to work alongside people in factories or warehouses, and even in hospitals or restaurants.

Read more: Flipping hell! Flippy the burger-bot too fast for human workers

Reading robot body language

Disney’s new research explores human-to-robot handovers by varying the robot’s body language. The team wanted to examine how changes in a robot’s behaviour, posture, and movement would influence human participation.

The study found that the robot’s initial pose made a difference to the fluency and efficiency of the overall handover of a task, either from machine to human, or vice versa. The speed of the robot and its grasping method also had an impact.
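
As a rough sketch only (this is not Disney’s published code, and every condition name below is an illustrative assumption), the factors described above can be pictured as a small factorial design, with each combination of initial pose, speed, and grasp method forming one experimental condition:

    # Illustrative Python sketch of a factorial handover experiment design.
    # Condition names are hypothetical; they do not come from the Disney study.
    from itertools import product

    initial_poses = ["arm_lowered", "arm_extended"]    # robot's starting posture
    speeds = ["slow", "fast"]                          # handover motion speed
    grasp_methods = ["pinch_grip", "full_hand_grip"]   # how the robot holds the object

    # Every combination of the three factors is one condition to test with participants.
    conditions = list(product(initial_poses, speeds, grasp_methods))

    for trial_id, (pose, speed, grasp) in enumerate(conditions):
        # In a real study, each trial would also log fluency measures such as
        # the giver's release timing and the total handover duration.
        print(f"trial {trial_id}: pose={pose}, speed={speed}, grasp={grasp}")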

According to the researchers, “This effect may occur by changing the giver’s perception of object safety and hence their release timing. Alternatively, it may stem from unnatural or mismatched robot movements.”

The key to more fluent interactions? Making the robot predictable, says Disney. Participants in the study also reported less discomfort and more “emotional warmth” as they became more familiar with the robot’s behaviour.

“We find these results exciting, as we believe a robot can become a trusted partner in collaborative tasks,” wrote the researchers, Matthew Pan and Elizabeth Croft of the University of British Columbia and Monash University, respectively.

Read more: DHL US trials robots, AI, AR & crowdsourcing to beat Amazon

Building rapport with robots

The most obvious conclusion of the study is that it’s possible for humans to build a kind of rapport with their robotic counterparts. While it may seem superficial, any effective partnership depends on both parties being comfortable with the arrangement. This is particularly the case when performing manual tasks.

Research such as this could lay the foundations for best practices in robot design and manufacturing. Making robots ‘affable’ and predictable is a big step toward convincing human workers that an artificial helping hand is something to be embraced rather than feared.

Internet of Business says

The need for human beings to feel safe around robots, and even to enjoy working with them, will be a vital consideration in the future, as robots of every kind move into the workplace. If ‘cobots’ are to collaborate with people successfully in manual labour settings of any type, then human beings must feel valued and understood, and equally, must value and appreciate their machine colleagues.

The same principle applies to humanoid robots, and perhaps especially to ones that are designed to both understand and express or simulate human emotions.

A seminar at London’s Design Museum in January 2018, jointly presented by psychologist and writer Adriana Hamacher and Internet of Business editor Chris Middleton, shared a number of stories about how people form bonds with humanoid robots, and will even lie to an emotion-expressing robot in order not to hurt its feelings.

Hamacher’s own research programme was reported here in 2016, and showed that emotional bonds between humans and machines can be genuine, and often surprising. Her study concluded that people often prefer working with robots that are expressive to ones that are merely efficient.

The post Disney researches safe human-robot interactions appeared first on Internet of Business.

Internet of Business

Android Head Of Security Claims The Platform Is “Now As Safe As The Competition”

Google’s 2017 Android Security report is out, and alongside it, the company’s head of Android security David Kleidermacher has claimed that Android is now “as safe as the competition”, despite the high-profile security issues that have dogged the platform for years, including over the past 12 months.

[ Continue reading this over at RedmondPie.com ]

Redmond Pie

Google claims Android is “as safe as the competition” despite its outdated install base

Google’s head of Android security David Kleidermacher claimed in an interview that "Android is now as safe as the competition" on the release of the company’s 2017 Android Security report, which seeks to reassure users that it is doing everything it can to protect them from malware and exploits. The problem is that Google can’t secure the 2 billion Androids it claims as its platform.
AppleInsider – Frontpage News

OpenAI Wants to Make Safe AI, but That May Be an Impossible Task

True artificial intelligence is on its way, and we aren’t ready for it. Just as our forefathers had trouble visualizing everything from the modern car to the birth of the computer, it’s difficult for most people to imagine how much truly intelligent technology could change our lives as soon as the next decade — and how much we stand to lose if AI goes out of our control.

Fortunately, there’s a league of individuals working to ensure that the birth of artificial intelligence isn’t the death of humanity. From Max Tegmark’s Future of Life Institute to the Harvard Kennedy School of Government’s Future Society, the world’s most renowned experts are joining forces to tackle one of the most disruptive technological advancements (and greatest threats) humanity will ever face.

Perhaps the most famous organization to be born from this existential threat is OpenAI. It’s backed by some of the most respected names in the industry: Elon Musk, the SpaceX billionaire who co-founded OpenAI but departed its board this year to avoid conflicts of interest with Tesla; Sam Altman, the president of Y Combinator; and Peter Thiel, of PayPal fame, to name just a few. If anyone has a chance at securing the future of humanity, it’s OpenAI.

But there’s a problem. When it comes to creating safe AI and regulating this technology, these great minds have little clue what they’re doing. They don’t even know where to begin.

The Dawn of a New Battle

While traveling in Dubai, I met with Michael Page, the Policy and Ethics Advisor at OpenAI. Beneath the glittering skyscrapers of the self-proclaimed “city of the future,” he told me of the uncertainty that he faces. He spoke of the questions that don’t have answers, and the fantastically high price we’ll pay if we don’t find them.

The conversation began when I asked Page about his role at OpenAI. He responded that his job is to “look at the long-term policy implications of advanced AI.” If you think that this seems a little intangible and poorly defined, you aren’t the only one. I asked Page what that means, practically speaking. He was frank in his answer: “I’m still trying to figure that out.” 

[Infographic: Types of AI, from reactive to self-aware]

Page attempted to paint a better picture of the current state of affairs by noting that, since true artificial intelligence doesn’t actually exist yet, his job is a little more difficult than usual.

He noted that, when policy experts consider how to protect the world from AI, they are really trying to predict the future. They are trying to, as he put it, “find the failure modes … find if there are courses that we could take today that might put us in a position that we can’t get out of.” In short, these policy experts are trying to safeguard the world of tomorrow by anticipating issues and acting today. The problem is that they may be faced with an impossible task.

Page is fully aware of this uncomfortable possibility, and readily admits it. “I want to figure out what can we do today, if anything. It could be that the future is so uncertain there’s nothing we can do,” he said.

Our problems don’t stop there. It’s also possible that we’ll figure out what we need to do in order to protect ourselves from AI’s threats, and realize that we simply can’t do it. “It could be that, although we can predict the future, there’s not much we can do because the technology is too immature,” Page said.

This lack of clarity isn’t really surprising, given how young this industry is. We are still at the beginning, and so all we have are predictions and questions. Page and his colleagues are still trying to articulate the problem they’re trying to solve, figure out what skills we need to bring to the table, and what policy makers will need to be in on the game.

As such, when asked for a concrete prediction of where humanity and AI will together be in a year, or in five years, Page didn’t offer false hope: “I have no idea,” he said.

However, Page and OpenAI aren’t alone in working on finding the solutions. He therefore hopes such solutions may be forthcoming: “Hopefully, in a year, I’ll have an answer. Hopefully, in five years, there will be thousands of people thinking about this,” Page said.

Well then, perhaps it’s about time we all get our thinking caps on.

The post OpenAI Wants to Make Safe AI, but That May Be an Impossible Task appeared first on Futurism.

Futurism

A New Chemical Treatment Could Make Water Safe to Drink for Months

Scientists at Lithuania’s Kaunas University of Technology have come up with a new method to purify water and keep it clean for months.

Currently, purifying water is fairly easy. Most tourists travelling to exotic destinations will keep purifying tablets in their bag along with other ordinary supplies such as plasters or malaria pills. What we haven’t quite managed to achieve is a way to make sure that the water we clean up today will stay drinkable tomorrow, or next week. This is because new bacteria can come into contact with it and contaminate it again, something scientists call “secondary contamination”.

The Lithuanian team sought to address just that, and early tests suggest that their method is so effective it kills off microbes for over three months. The researchers, who chose not to share the details of their methodology but did explain their results, observed that microbes did not breed in drinking water stored in the open after the purification technique was applied, and found that the purified water did not taste or smell any different from standard drinking water from the tap.

Not only is their solution particularly successful, it also operates using a very low concentration of its active ingredient, in this case silver.

Silver has been used to purify water as far back as Ancient Rome. However, there are lingering worries about its potential toxicity when consumed in high concentrations, as outlined in a 2014 literature review published by the World Health Organization. In particular, silver is known to be dangerous for the liver.

Some domestic water filtration systems do use silver, but the team wants to make their technology available in liquid and tablet form, so it can be used in difficult circumstances such as military operations. Since these methods are designed to be mixed in with water, the concentration of active ingredients is extremely important.

The technique has now been patented, with a prototype for industrial use ready for implementation. Although the treatment is still in its early stages, the researchers believe that their method has the potential to become so cost-effective that it could soon be scaled up and employed in the bottled water industry.

The post A New Chemical Treatment Could Make Water Safe to Drink for Months appeared first on Futurism.

Futurism

Why Apple needs HomePod to be as safe as houses

I’ve been using a HomePod system this week. I’m planning to write more about it, but today I wanted to discuss what everyone considering connected smart home devices should think about first: privacy and security.

Your life on view

Smart home devices communicate with each other.

They also communicate with their manufacturers, and this means significant insights can be gathered by anyone who succeeds in monitoring this informational flow.
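
As a minimal sketch of the point above (this example is not from the article; it assumes the Python scapy library and a machine on the same network with capture privileges), even someone who never decrypts a single payload can learn a lot simply by tallying which servers your smart home devices talk to, and how often:

    # Illustrative only: passively count packets per destination on the local network.
    # Requires scapy (pip install scapy) and root/admin rights to sniff traffic.
    from collections import Counter
    from scapy.all import sniff, IP

    destination_counts = Counter()

    def tally(packet):
        # Record only metadata: the destination IP address, not the packet contents.
        if IP in packet:
            destination_counts[packet[IP].dst] += 1

    # Capture 500 packets from the default interface, then report the top talkers.
    sniff(prn=tally, count=500, store=False)

    for destination, count in destination_counts.most_common(10):
        print(destination, count)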

Computerworld Mobile