Balancing Human Ethics And Artificial Intelligence

By Jennifer L. Schenker | January 23, 2017

(This article first appeared in the 2017 Techonomy print and digital magazine.)

You are driving along and your car’s brakes suddenly fail. If it swerves to the left, three old men and two elderly women will die. If the car veers to the right, it kills a woman doctor, two babies and a boy and girl.

Who should die? This question is part of the MIT Media Lab’s Moral Machine, a platform for gathering people’s opinions on moral decisions made by machine intelligence, such as self-driving cars. In the coming age of automation and artificial intelligence (AI), such life-and-death decisions, and many other complicated ones, will increasingly be made by machines rather than people.

A lot depends on who determines the value systems for artificial intelligence software. Those values could be carefully and methodically crowd-sourced from society at large, or they could simply reflect the ethics of an overworked programmer racing to meet a product deadline. Since he or she is likely to work at a company that answers to investors, the outcome may not be what we would consider socially responsible. “We should not let Silicon Valley be the mission control for humanity,” argues futurist Gerd Leonhard, author of a new book called Technology vs. Humanity: The Coming Clash Between Man and Machine.

If autonomous AI software, crunching data far more rapidly than humans, can help eradicate disease and poverty and introduce societal improvements and efficiencies, then we must embrace it, Leonhard says. But “at the same time we have to have governance. And right now there is no such thing.” He and others are pushing for human values to be codified into the design of AI systems.

“Programmers and systems need to implement ethical standards from the operating system level up,” says John C. Havens, author of Heartificial Intelligence: Embracing Our Humanity to Maximize Machines. Havens is executive director of the Global Initiative for Ethical Considerations in the Design of Autonomous Systems, formed in 2016 by the IEEE, a large association for engineers, to help incubate new AI standards.

“The Internet of Things is accelerating the introduction of AI autonomous systems,” says Mary Ward Callan, who directs technical activities for the IEEE, referring to technologies like self-driving cars and systems that optimize traffic and other urban infrastructure. “As the IEEE’s mission is to advance technology for the benefit of humanity, we need to determine how we can design in aspects of our human decision process. We have not yet constructed that model.”

In his book, Havens warns that if ethics are not baked into AI systems, algorithms simply seeking to fulfill their goals may cause harm. The dark side of AI is increasingly in the spotlight. In his own book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom of Oxford University imagines an AI that has been programmed to make as many paper clips as possible. It ruthlessly transforms all of Earth, and then ever larger portions of outer space, into paper clip manufacturing facilities.

Bostrom’s book helped inspire Elon Musk, the CEO of Tesla and SpaceX, to say that AI “is potentially more dangerous than nukes.” Musk, physicist Stephen Hawking and others in the scientific and tech community signed an open letter last year calling for a ban on autonomous military weapons and for work to ensure that AI systems are beneficial to humanity.

A number of efforts are underway to conduct research and education on the complex challenges facing a world heading towards widespread AI. Musk is one of the co-founders of OpenAI, a research institute that plans to spend more than $1 billion to steer AI in a positive direction. Meanwhile, Google, Amazon, Facebook, IBM, and Microsoft have formed the Partnership on AI with a similar goal. And Stanford University has launched the One Hundred Year Study on Artificial Intelligence, aiming to publish a report on the societal impact of AI every five years for the next century.

Discussion on ethics and AI is not limited to the tech community. A day was devoted to “man-machine convergence” at the giant 2016 Sibos financial services conference organized by Swift, a global bank cooperative. Banks already use robo-advisors and will implement more autonomous AI systems. Says Peter Vander Auwera of Swift: “With the growing tension between technology and humanity we need to think through the digital ethics dimensions of an algorithmic economy for financial services.”

Science fiction writer Isaac Asimov formulated the “Three Laws of Robotics”: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings except where such orders would conflict with the first law; and a robot must protect its own existence as long as that protection does not conflict with the first or second law. It sounds good, but we’ll need more sophisticated rules than that, many experts have concluded. The Partnership on AI has proposed its own eight new tenets for people developing the technologies. They include “ensure that AI technologies benefit and empower as many people as possible,” “protect the privacy and security of individuals,” and “remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society.”

“Machines have to understand complex change and consequences if they are going to be empowered with decisions,” says Dr. David Hanson, founder of Hanson Robotics. His company aims to make “friendly and empathic” robots. Its home page optimistically predicts: “In the not-too-distant future, Genius Machines will walk among us. They will be smart, kind, and wise. Together, man and machine will create a better future for the world.”

But Hanson himself warns that “to understand ethics, machines will have to understand not just the big picture and patterns but the human heart.” Yet that is challenging enough for mere people. And we often don’t behave in ways consistent with the values we profess. Heartificial Intelligence author Havens asks, “How will machines know what we value if we don’t know ourselves?”

If we are going to codify human values into the intelligent systems that surround us, we must have a wider societal discussion about what our common values actually are. There won’t be unanimity, so we will have to develop ways to move forward, even with uncertainty. “I don’t have great faith in the consistency of the human value system,” says AI expert Dr. Ben Goertzel. “By experimenting with AI and ethics in simple situations we will learn more about this topic and what to do. Pontificating in the abstract is not going to be useful.”

But the prospect of a national and even global dialogue about what machines should and shouldn’t do, in effect about the bedrock behavioral values of mankind, could ironically subvert the oft-articulated argument that machines are taking us away from ourselves. It’s one thing to spend time staring at a smartphone instead of talking to the family at dinner. But it’s quite another to be forced to say what we value most in order for technologists to proceed. Some will argue we needn’t make such decisions; it would be simpler, perhaps, to just return to the forest.

One more practical solution might be something called society-in-the-loop artificial intelligence, a concept developed by Iyad Rahwan, an MIT Media Lab associate professor. Rahwan is polling the public through the Moral Machine online test to find out what decisions people would want self-driving cars to make. The idea is that through such polls we can train machines to behave in ways people feel fairly reflect their values, much as we agree to allow elected government officials to represent us.
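Rahwan’s research is far more statistically sophisticated, but the core idea of distilling crowd responses into a machine-readable preference can be sketched in a few lines of code. The scenario labels, choices, and simple majority rule below are illustrative assumptions, not the Moral Machine’s actual methodology:

```python
from collections import Counter

def aggregate_preferences(responses):
    """Tally respondents' choices and return the majority choice per dilemma."""
    tallies = {}
    for scenario, choice in responses:
        # Count how many respondents picked each option for this scenario.
        tallies.setdefault(scenario, Counter())[choice] += 1
    # The most common answer becomes the crowd's "preferred" policy.
    return {scenario: counts.most_common(1)[0][0]
            for scenario, counts in tallies.items()}

# Hypothetical poll data: each tuple is one respondent's answer to one dilemma.
responses = [
    ("brakes_fail_swerve_left_or_right", "swerve_right"),
    ("brakes_fail_swerve_left_or_right", "swerve_right"),
    ("brakes_fail_swerve_left_or_right", "swerve_left"),
]

print(aggregate_preferences(responses))
# {'brakes_fail_swerve_left_or_right': 'swerve_right'}
```

In practice, researchers weight and model such responses rather than taking a raw majority vote, but even this toy version shows how public input could be turned into a rule a machine can follow.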

Joi Ito, director of the MIT Media Lab, argued in a recent essay that if this works, human judges could eventually be replaced by AI for legal decisions like bail and parole. But, he says, “this will most likely require making the tools of machine learning available to everyone, having a very open and inclusive dialogue and redistributing the power that will come from advances in artificial intelligence, not just figuring out ways to train it to appear ethical.” We remain very far from that now.

Meanwhile, the move towards regulating and restraining our most powerful technologies is further complicated by the controlling role that for-profit corporations, including behemoths like Google and Facebook, often play in their deployment. It is nice that they are professing concern about AI’s impact, but they face other related challenges in how their work affects society and in retaining our trust. The algorithms underlying their search and newsfeed software could, for example, be programmed to swing the results of elections almost anywhere in the world. Nobody would know unless a rogue employee turned whistleblower.

Thinking very big, AI expert Goertzel argues that the issues of trust around AI could lessen over the long term as people become cyborgs and the divisions between man and machine begin to blur. “The way we think about ourselves will change,” he says. “Once an iPhone is inside your head and becomes a part of you and you start networking with other people and robots, there will be less of an ‘us versus them’ mentality.” Some may not be consoled by this conclusion.

Author Leonhard argues that the key for now is to better understand where tech ends and where humanity starts. “We need to define what makes us human and decide what should be automated and what should not be,” he says. “We must embrace technology, but we must not become it.”

Jennifer Schenker is editor-in-chief of The Innovator and a senior contributor to Techonomy.