AI regulation and ethics: How to build human-centric AI

As debates rage across the world about the growing impact of AI, data analytics, and autonomous systems, Joanna Goodman was invited to sit in on an all-party Parliamentary panel of experts. This is her report.

“However autonomous our technology becomes, its impact on the world – for better or worse – will always be our responsibility.” Those are the words of Professor Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, and chief scientist for AI research at Google Cloud.

Professor Li’s vision of ‘human-centred’ AI was reflected in the third evidence session of the all-party parliamentary group on AI (APPG) at the House of Lords this month. It considered ethics and accountability in the context of managing and regulating AI, as the technology moves into more and more aspects of our lives. The UK government also established an Office for AI earlier this year.

Since then, we have seen the Cambridge Analytica Facebook ‘breach’ unfold, while a driverless Uber car killed a pedestrian in Arizona, where autonomous vehicles are being tested on public roads. These and other stories – such as the problem of bias entering some AI systems – have led to more calls for ‘vigilance’ and tighter regulation.

Read more: Cambridge Analytica vs Facebook: Why AI laws are inadequate

Read more: Uber halts self-driving car tests after pedestrian is killed

The APPG considered three questions about AI and human responsibility:

• How do we make ethics part of business decision-making processes?
• How do we assign responsibility for algorithms?
• What auditing bodies can monitor the ecosystem?

Tracey Groves, founder and director of Intelligent Ethics – an organisation dedicated to optimising ethical performance in business – discussed the importance of education, empowerment, and excellence in relation to AI, and suggested the following approaches to achieving all three.

Education, empowerment, excellence

Education is about leadership development, mentoring, and coaching, she said, and about awareness training to promote the importance of ethical decision-making.

Empowerment involves building a trustworthy culture, by aligning an organisation’s values with its strategic goals and objectives, and establishing “intelligent accountability”.

Finally, achieving excellence means identifying the key performance indicators of ethical conduct and culture, she said, and then monitoring progress and actively measuring performance.

Groves highlighted inclusivity as a critical success factor in ethical decision-making, along with giving people the ability to seek redress when AI gets things wrong.

Finally, she emphasised that managing risks associated with AI software is not just the responsibility of government and regulation; all businesses need to establish ethical values that can be measured, she said. Regulation will require businesses to be accountable, she added, and – potentially – penalise them if they are not.

Building responsibility

Aldous Birchall, head of financial services AI at PwC, focused on the topic of machine learning. He advocated building responsibility into AI software, and developing common standards and sensible regulations.

Machine learning moves software to the heart of the business, he explained. AI presents exciting new opportunities, which tech companies pursue with the best intentions, but insufficient thought is given to the societal impact.

“Engineers focus on outcomes and businesses focus on decisions,” he said, adding that machine learning and AI training should include ethics and a clear understanding of how algorithms impact society.

Some companies may appoint an ethics committee, he said, while others may introduce new designations or roles to manage risk and risk awareness. The scalability of software systems means that problems can escalate quickly, too, he added.

Birchall believes that assigning human responsibility for algorithms, if AI goes wrong or is applied incorrectly or inappropriately, must be about establishing a chain of causality. Ownership brings responsibility, he said.

Birchall suggested that something like an MOT for autonomous vehicles could be a workable solution. AI use cases are narrow, as algorithms handle a well-defined set of tasks, he said.

Monitoring and regulation need to be industry specific, he concluded. For example, financial services AI and healthcare AI raise completely different issues and therefore require different safeguards.

Regulating AI

Birchall offered four suggestions for how AI might be regulated:

• Adapt engineering standards to AI
• Train AI engineers about risk
• Engage and train organisations to consider the risks, as well as the benefits
• Give existing regulatory bodies a remit over AI too.

Robbie Stamp, chief executive at strategic consultancy Bioss International, reminded the APPG that AI cannot be ethical in itself because it does not have “skin in the game”. Ethical AI governance is all about human accountability, he said.

“As we navigate emergence and uncertainty, governance should be based on understanding key boundaries in relation to the work we ask AI to do, rather than on hard and fast rules,” said Stamp. He flagged up the Bioss AI Protocol, an ethical governance framework that tracks the evolving relationship between human and machine judgement and decision-making.

Automation compromises data quality

Sofia Olhede, director of UCL’s Centre for Data Science, highlighted how automated data collection compromises data quality and validity, leading to biased algorithmic decision-making.

Most algorithms are developed to deliver average outcomes, she said. These may be sufficient in some contexts – such as for making purchasing recommendations – but they may be completely inadequate when the consequences are life-changing or business-critical.

“Algorithmic bias threatens AI credibility and fuels inequalities,” said Olhede, adding that because algorithms learn from the data they have been exposed to, they reflect any human and/or historical bias in that data. And if data is collected ubiquitously, its biases may not reflect societal norms. Therefore, it is important to establish standards for data curation.

Otherwise, for example, a potential bias in favour of those who adopt technology – and therefore produce more data – may impact negatively on other groups, such as the elderly or anyone who makes minimal use of digital systems.

On the subject of ethics, Olhede expressed her hopes for standard-setting. “Many companies are establishing internal ethics boards, but rather than having these spring up like mushrooms, we need common principles about their purpose,” she said.

Achievements versus risks

Tom Morrison-Bell, government affairs manager at Microsoft, highlighted the achievements and potential of AI technology. For example, Microsoft’s Seeing AI app helps visually impaired people to manage human interactions by describing people and reading expressions.

However, he doesn’t underestimate the ethical risks: “Whatever the benefits and opportunities of AI, if the public don’t trust it, it’s not going to happen,” he said.

The debate moved on to whether algorithmic transparency would provide greater reassurance and encourage trust. “Most companies are working to become more transparent. They don’t want AI black boxes,” said Birchall.

“If an algorithm leads to a decision being made about someone, they have a right to an explanation. But what do we mean by an explanation?” asked Olhede, adding that not all algorithms are easily explainable or understood.

Internet of Business says

This, then, is the critical problem, and the underlying question is: how much transparency and control are required to establish trustworthy AI?

As Groves observed, it is possible to trust technology without knowing exactly how it works. Most people therefore need to understand the implications of AI and algorithms, rather than the technology itself – whatever is inside the black box. They need to be aware of the potential risks and understand what those mean for them.

This is particularly critical when even scientists and developers in the field don’t understand how some black-box neural networks have arrived at decisions – according to a UK-RAS presentation at UK Robotics Week last year.

Professor Gillian Hadfield, author of Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy, believes we may simply be asking the wrong questions.

“How do we build AI that’s safe and valuable and reflects societal norms, rather than exposing patterns of behaviour?” she asks. “Perhaps instead of discussing what AI should be allowed to do, we should involve social scientists in considering how to build AI that can understand and participate in our rules.”

• The debate took place in a private committee room at Parliament on 12 March 2018.

 Joanna Goodman is a freelance journalist who writes about business and technology for national publications, including The Guardian newspaper and the Law Society Gazette, where she is IT columnist. Her book Robots in Law: How Artificial Intelligence is Transforming Legal Services was published in 2016.

Chelsea Manning: ‘Software developers should have a code of ethics’

Whistleblower, activist, and Senate candidate Chelsea Manning spoke extensively at SXSW about the dangers of unchecked data collection and misplaced trust in algorithms. “The algorithms that I worked on in Iraq have found their way into policing, and also into the way the corporate world works, whether it’s your credit report or advertising data,” said Manning, who was released from prison last May after former President Barack Obama commuted her 35-year sentence for leaking classified intelligence. “All these different tools that we saw being used in one context have found their way everywhere else.”

Manning compared her work on predictive analysis in the Army a decade ago to how she fears modern programmers have approached artificial…

The Institute of Electrical and Electronics Engineers Issues Guide on AI Ethics

AI Ethics

The Institute of Electrical and Electronics Engineers (IEEE) has published a second draft of its guide to ‘Ethically Aligned Design.’ As artificial intelligence and autonomous machines become a greater presence in day-to-day life, this document should help ensure that their development is carried out with the proper care and consideration.

“Our goal is that Ethically Aligned Design will provide insights and recommendations that provide a key reference for the work of technologists in the related fields of science and technology in the coming years,” reads the mission statement of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, included in the guide.

The global initiative comprises thirteen distinct committees, bringing together several hundred technical and sociological experts from six continents, drawn from academia, industry, civil society, policy, and government. Each committee focused on a particular component of development, ranging from the challenge of embedding values into autonomous intelligent systems to the economic and humanitarian issues these technologies will bring about.

Eight committees were established for the first draft of Ethically Aligned Design, and another five have been added for the second version: Affective Computing, Policy, Classical Ethics in Autonomous and Intelligent Systems, Mixed Reality in ICT, and Well-being.

This diverse set of contributors was assembled in the hope of giving people working with AI and autonomous systems a broad understanding of the ethical and societal implications of their output. Just as both technical and sociological perspectives are included, efforts were made to represent a range of cultures.

For example, different gestures made by a robot might have different connotations in different parts of the world – engaging in small talk might be desirable in certain contexts, and rather undesirable in others, and eye contact might be seen as polite or impolite depending on the norms of that community. By drawing upon a wide range of sources, this document attempts to stress the importance of taking these considerations into account when developing technology for a global audience.

Laying the Foundations

AI and autonomous systems are already being implemented by various industries, and this is only set to become more prevalent in years to come. As such, it’s crucial that we put certain guidelines in place before this uptick in adoption begins in earnest.

“We’re not issuing a formal code of ethics,” said Raja Chatila, chair of the initiative’s executive committee, in an interview with Inverse published shortly after the original report. “No hard-coded rules are really possible.”

International governments do have a part to play in regulating new technology. Joanna Bryson, who served as head of the committee alongside Ron Arkin, acknowledged lawmakers’ role in this process when she spoke to Futurism earlier this year. However, self-governance is also important, and the hope is that Ethically Aligned Design will shape future development accordingly.

Crowdsourced Morality Could Determine the Ethics of Artificial Intelligence

A Moral Machine?

As artificial intelligence (AI) development progresses, experts have begun considering how best to give an AI system an ethical or moral backbone. A popular idea is to teach AI to behave ethically by learning from decisions made by the average person.

To test this assumption, researchers from MIT created the Moral Machine. Visitors to the website were asked to make choices regarding what an autonomous vehicle should do when faced with rather gruesome scenarios. For example, if a driverless car was being forced toward pedestrians, should it run over three adults to spare two children? Save a pregnant woman at the expense of an elderly man?

The Moral Machine was able to collect a huge swath of this data from random people, so Ariel Procaccia from Carnegie Mellon University’s computer science department decided to put that data to work.

In a new study published online, he and Iyad Rahwan — one of the researchers behind the Moral Machine — taught an AI using the Moral Machine’s dataset. Then, they asked the system to predict how humans would want a self-driving car to react in similar but previously untested scenarios.

In effect, Procaccia wanted to demonstrate how a voting-based system could provide a solution to the ethical AI question, and he believes his algorithm can infer the collective ethical intuitions present in the Moral Machine’s data. “We are not saying that the system is ready for deployment,” he told The Outline. “But it is a proof of concept, showing that democracy can help address the grand challenge of ethical decision making in AI.”
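For illustration only, here is a minimal sketch (not Procaccia and Rahwan’s published method) of the general idea: fit a model to Moral Machine-style choices, then ask it to predict the majority preference in an unseen dilemma. The feature encoding and the tiny dataset below are entirely hypothetical.

```python
# Hypothetical sketch: learn crowd preferences from Moral Machine-style dilemmas,
# then predict the likely majority choice in a new, unseen scenario.
# This is not the researchers' actual algorithm; it only illustrates the idea of
# fitting a model to collective choices.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row encodes one dilemma as the difference in who is put at risk by
# option A versus option B, in the order [adults, children, elderly, animals].
X = np.array([
    [ 3, -2,  0,  0],   # 3 adults (A) vs. 2 children (B)
    [ 0,  0,  1, -1],   # 1 elderly person (A) vs. 1 animal (B)
    [ 1,  0, -1,  0],   # 1 adult (A) vs. 1 elderly person (B)
    [ 0, -1,  0,  2],   # 1 child (A) vs. 2 animals (B)
])
y = np.array([1, 0, 1, 0])  # 1 = the crowd chose to sacrifice group A, 0 = group B

model = LogisticRegression().fit(X, y)

# Unseen scenario: 2 adults on one side, 1 child and 1 elderly person on the other.
print(model.predict_proba([[2, -1, -1, 0]]))
```

As the article notes, the published system is voting-based, aggregating preferences across respondents rather than pooling everything into a single classifier as this toy example does.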

Crowdsourced Morality

This idea of having to choose between two morally problematic outcomes isn’t new. Ethicists even have a name for it: the doctrine of double effect. However, applying the concept to an artificially intelligent system is something humankind has never had to do before, and numerous experts have shared their opinions on how best to go about it.

OpenAI co-chairman Elon Musk believes that creating an ethical AI is a matter of coming up with clear guidelines or policies to govern development, and governments and institutions are slowly heeding Musk’s call. Germany, for example, crafted the world’s first ethical guidelines for self-driving cars. Meanwhile, DeepMind, the AI company owned by Google parent Alphabet, now has an ethics and society unit.

Other experts, including a team of researchers from Duke University, think that the best way to move forward is to create a “general framework” that describes how AI will make ethical decisions. These researchers believe that aggregating the collective moral views of a crowd on various issues — like the Moral Machine does with self-driving cars — to create this framework would result in a system that’s better than one built by an individual.

However, this type of crowdsourced morality isn’t foolproof. One sample group may have biases that wouldn’t be present in another, and different algorithms can be presented with the same data yet arrive at different conclusions.

For Cornell School of Law professor James Grimmelmann, who specializes in the dynamic between software, wealth, and power, the idea of crowdsourced morality itself is inherently flawed. “[It] doesn’t make the AI ethical,” he told The Outline. “It makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical.”

For Procaccia, these limitations are valid, and he acknowledges that their research is still only a proof of concept. However, he believes a democratic approach to building a moral AI could work. “Democracy has its flaws, but I am a big believer in it,” he said. “Even though people can make decisions we don’t agree with, overall democracy works.”

DeepMind forms an ethics group to explore the impact of AI

Google's AI-research arm DeepMind has announced the creation of DeepMind Ethics & Society (DMES), a new unit dedicated to exploring the impact and morality of the way AI shapes the world around us. Along with external advisors from academia and t…

Stanford’s Final Exams Pose Question About the Ethics of Genetic Engineering

Stanford’s Moral Pickle

When bioengineering students sit down to take their final exams at Stanford University, they are faced with a moral dilemma, as well as a series of grueling technical questions designed to sort the intellectual wheat from the less competent chaff:

If you and your future partner are planning to have kids, would you start saving money for college tuition, or for printing the genome of your offspring?

The question is a follow-up to “At what point will the cost of printing DNA to create a human equal the cost of teaching a student in Stanford?” Both questions refer to the very real possibility that it may soon be affordable to print off whatever stretch of DNA you desire, using genetic sequencing and a machine capable of synthesizing the four building blocks of DNA — A, C, G, and T — in any order.

The answer to the time question, by the way, is 19 years, given that the cost of tuition at Stanford remains at $50,000 and the price of genetic printing continues the 200-fold decrease that has occurred over the last 14 years. Preliminary work has already been done; a team led by Craig Venter created the simplest life form ever known last year.
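As a back-of-the-envelope check on that 19-year figure, here is the extrapolation under assumptions the article does not spell out: a current synthesis cost of roughly $0.10 per base pair (about $300 million for a 3-billion-base-pair human genome), compared against a four-year Stanford bill of about $200,000, with the 200-fold-per-14-years decline continuing at a constant exponential rate.

```python
# Rough extrapolation of when genome "printing" could cost the same as a Stanford
# education. The starting synthesis cost and the four-year total are assumptions
# made for illustration, not figures taken from the article.

import math

genome_bases = 3e9                 # approximate size of the human genome
cost_per_base = 0.10               # assumed current synthesis cost (USD per base)
printing_cost_today = genome_bases * cost_per_base   # about $300 million

tuition_total = 4 * 50_000         # four years at $50,000

# A 200-fold price drop over 14 years, continued at a constant exponential rate.
annual_factor = 200 ** (1 / 14)

years = math.log(printing_cost_today / tuition_total) / math.log(annual_factor)
print(round(years))                # about 19
```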

The Ethics of Changing DNA

Stanford’s moral question, though, is a little trickier. The question is part of a larger conundrum concerning humans interfering with their own biology; since the technology is developing so quickly, the issue is no longer whether we can or can’t, but whether we should or shouldn’t. The debate has two prongs: gene editing and life printing.

With the explosion of CRISPR technology — many studies are due to start this year — the ability to edit our genetic makeup will arrive soon. But how much should we manipulate our own genes? Should the technology be a reparative one, reserved for making sick humans healthy again, or should it be used to augment our current physical restrictions, making us bigger, faster, stronger, and smarter?

The question of printing life is similar in some respects; rather than altering organisms to have the desired genetic characteristics, we could print and culture them instead — an area in which billions have already been invested. However, there is the additional issue of “playing God” by sidestepping the methods of reproduction that have existed since the beginning of life. Even if the ethical issue of creation were answered adequately, there are the further questions of who has the right to design life, what the regulations would be, and the potential restrictions on the technology based on cost; if it’s too pricey, gene editing could be reserved only for the rich.

It is vital to discuss the ethics of gene editing in order to ensure that the technology is not abused in the future. Stanford’s question is praiseworthy because it makes today’s students, who will most likely be spearheading the technology’s developments, think about the consequences of their work.

Study Finds That Human Ethics Could Be Easily Programmed Into Driverless Cars

Programming Morality

A new study from The Institute of Cognitive Science at the University of Osnabrück has found that the moral decisions humans make while driving are not as complex or context dependent as previously thought. Based on the research, which has been published in Frontiers in Behavioral Neuroscience, these decisions follow a fairly simple value-of-life-based model, which means programming autonomous vehicles to make ethical decisions should be relatively easy.

For the study, 105 participants were put in a virtual reality (VR) scenario during which they drove around suburbia on a foggy day. They then encountered unavoidable dilemmas that forced them to choose between hitting people, animals, and inanimate objects with their virtual car.

The previous assumption was that these types of moral decisions were highly contextual and therefore beyond computational modeling. “But we found quite the opposite,” Leon Sütfeld, first author of the study, told Science Daily. “Human behavior in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.”
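To make the finding concrete, here is a minimal sketch of what such a value-of-life model might look like in code. The category values are hypothetical placeholders rather than the ones estimated from the study’s participants; the point is only that a single number per entity type, with no richer contextual reasoning, is enough to drive the choice.

```python
# A toy value-of-life model: assign each entity category a single value and pick
# the option that puts the least total value at risk. The values are hypothetical.

VALUES = {"adult": 1.0, "child": 1.2, "animal": 0.3, "object": 0.05}

def option_cost(entities):
    """Total value lost if this option's casualties occur."""
    return sum(VALUES[e] for e in entities)

def choose(option_a, option_b):
    """Return the option whose casualties carry the lower total value."""
    return "A" if option_cost(option_a) <= option_cost(option_b) else "B"

# Dilemma: swerve into an animal (option A) or stay on course toward an adult (B).
print(choose(["animal"], ["adult"]))   # -> "A"
```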

Better Than Human

A lot of virtual ink has been spilt online concerning the benefits of driverless cars. Elon Musk is in the vanguard, stating emphatically that those who do not support the technology are “killing people.” His view is that the technology can be smarter, more impartial, and better at driving than humans, and thus able to save lives.

Currently, however, the cars are large pieces of hardware supported by rudimentary driverless technology. The question of how many lives they could save is contingent upon how we choose to program them, and that’s where the results of this study come into play. If we expect driverless cars to be better than humans, why would we program them like human drivers?

As Professor Gordon Pipa, a senior author on the study, explained, “We need to ask whether autonomous systems should adopt moral judgements. If yes, should they imitate moral behavior by imitating human decisions? Should they behave along ethical theories, and if so, which ones? And critically, if things go wrong, who or what is at fault?”

The ethics of artificial intelligence (AI) remains swampy moral territory in general, and numerous guidelines and initiatives are being formed in an attempt to codify a set of responsible laws for AI. The Partnership on AI to Benefit People and Society is composed of tech giants, including Apple, Google, and Microsoft, while the German Federal Ministry of Transport and Digital Infrastructure has developed a set of 20 principles that AI-powered cars should follow.

Just how safe driverless vehicles will be in the future is dependent on how we choose to program them, and while that task won’t be easy, knowing how we would react in various situations should help us along the way.
