Is It Possible for AI to Be Ethical?

(L to R): Steve Leonard, SGInnovate Founding CEO; Richard Koh, Chief Technology Officer, Microsoft Singapore; Yeong Zee Kin, Assistant Chief Executive (Data Innovation and Protection Group) of IMDA and Deputy Commissioner of the Personal Data Protection Commission; and Dr David Hardoon, Chief Data Officer, Data Analytics Group, Monetary Authority of Singapore

Given the complexities of human morality, it might be impossible to design truly ethical AI, but developers should nonetheless always be alert to potential biases.

The rise of artificial intelligence (AI) has made the classic trolley problem in ethics a favourite at tech conferences, albeit in an updated form. In the context of driverless vehicles, the dilemma arises when the vehicle’s AI algorithm must decide whether to hit a young child or an elderly person. What does it mean to code such a decision into a computer algorithm? What is the ‘best’ choice?

“In such a lose-lose situation, I personally don’t know how even a human can make a so-called ‘best choice’,” said SGInnovate Founding CEO Mr Steve Leonard, who moderated a panel discussion.

The panel comprised experienced industry and regulatory professionals: Mr Richard Koh, Chief Technology Officer of Microsoft Singapore; Dr David Hardoon, Chief Data Officer and Head of the Data Analytics Group at the Monetary Authority of Singapore; and Mr Yeong Zee Kin, Assistant Chief Executive (Data Innovation and Protection Group) at the Infocomm Media Development Authority of Singapore and Deputy Commissioner of the Personal Data Protection Commission.

Mr Yeong, whose work includes developing forward-thinking governance on AI and data, concurred with Mr Leonard, saying, “When we obtained our driver’s licence, we were never asked to answer such a question. So why should we expect an AI to be able to answer it ‘correctly’?”

However, he also pointed out that companies producing driverless cars bear a certain level of responsibility. “At the end of the day, the system is designed by humans. If there are known high-risk scenarios involved, we shouldn’t be leaving it to AI models to make decisions based on data sets. The solution? Design it such that you can be in control, and manage the risk by narrowing the window for it to occur.”

For example, the autonomous car could be designed to slow down when entering a school zone, Mr Yeong suggested. “This way, you wouldn’t have the question about whether it can stop in time, or which individual to hit,” he said.
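To make the idea concrete, here is a minimal, hypothetical sketch of such a risk-narrowing rule in Python. It caps the vehicle’s speed inside a designated school zone, then reduces it further until the stopping distance fits within the sensors’ clear range. Every name and number here (Zone, SCHOOL_ZONE, target_speed_mps, the braking parameters) is illustrative and not drawn from any real autonomous-driving system.

```python
# Hypothetical sketch of "narrowing the window": instead of asking the planner
# to resolve a trolley-style dilemma, cap speed in known high-risk zones so the
# vehicle can always stop within the range its sensors can currently see.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Zone:
    name: str
    speed_cap_mps: float  # maximum speed permitted inside this zone


# An illustrative high-risk zone: ~30 km/h cap in school zones.
SCHOOL_ZONE = Zone("school", speed_cap_mps=8.3)


def stopping_distance(speed_mps: float,
                      reaction_s: float = 0.5,
                      decel_mps2: float = 6.0) -> float:
    """Distance covered during the reaction time plus braking distance (v^2 / 2a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)


def target_speed_mps(desired_mps: float,
                     active_zone: Optional[Zone],
                     sensing_range_m: float) -> float:
    """Highest speed that respects the zone cap AND lets the vehicle
    stop within what its sensors can currently see."""
    speed = desired_mps
    if active_zone is not None:
        speed = min(speed, active_zone.speed_cap_mps)
    # Back off until the stopping distance fits inside the clear sensing range.
    while speed > 0 and stopping_distance(speed) > sensing_range_m:
        speed -= 0.1
    return max(speed, 0.0)


if __name__ == "__main__":
    # Approaching a school zone at ~50 km/h with 25 m of clear sensing range:
    print(round(target_speed_mps(13.9, SCHOOL_ZONE, sensing_range_m=25.0), 1))
```

The design point mirrors Mr Yeong’s argument: the code never attempts to rank whom to hit; it simply keeps the vehicle out of states where that question can arise at all.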

Ethics Is Contextual

Although the humans who develop AI are partially responsible, such decisions are far from clear-cut, as different people may reach different conclusions about what is ethically right, Dr Hardoon said, citing an experiment dubbed the Moral Machine, which was launched by researchers at the Massachusetts Institute of Technology in 2016.

Instead of asking AI systems to choose between running over a young person or an old person, the Moral Machine put the question to humans, gathering some 40 million decisions from participants around the world. “What they found was that [people from] Asian cultures would choose to hit the younger person. I am not saying that it is right or wrong, but given the position of a choice, that was what they chose. However, for the Americans, the results were the exact opposite.”

By Kareyst Lin
