The recent acquisition of Mobileye by Intel for approximately $15B tells us that the age of AI is here to stay. The field is now a viable investment and development opportunity. But what is one to do when an artificially intelligent device, perhaps a vehicle, must choose between striking you and striking a crowd of pedestrians? What are the options, and what is the desired end of any decision the device should make? Discussions of artificial intelligence and ethics will be ongoing for the next several decades.
Now we come to the question: Can morality be coded? If so, how? Or is morality an evolutionary matter, and if so, can AI evolve its own morality? These questions and more will haunt us in the coming decades as we give computers autonomy over human life. Autonomous cars are only the beginning. Consider the movie WALL-E not only as a statement about the condition of humans who lack exercise but also as a statement about managed entertainment: someone, or something, ran the system.
It seems at first that morality is always being coded. The idea that a vehicle should stop and avoid an impact describes the computer’s duty. That seems simple enough. Or is it? The Future of Life team has posed some additional questions, including:
So, how do you make an AI that is able to make a difficult moral decision?
That is, as they say, huge. Why? Because it is a combination of history, worldview, need, and a host of other assumptions. What ethical foundation shall we employ? And what ends are the best ends? Are those ends always the best ends, or might they change? How does the human context play into the situation? Can we account for such things as honor and dishonor as ends to the process?
The list might go on forever, but we must not forget one thing: the computer is only capable of what it is programmed to do. The core of ethical decisions must be coded, and that includes the variants alluded to above. The computer is no initiating judge of humanity, nor should it be.
The author pursues the same question from a different framework:
Conitzer articulates the problem by looking to previous decades, “if we did the same ethical tests a hundred years ago, the decisions that we would get from people would be much more racist, sexist, and all kinds of other things that we wouldn’t see as ‘good’ now. Similarly, right now, maybe our moral development hasn’t come to its apex, and a hundred years from now people might feel that some of the things we do right now, like how we treat animals, is completely immoral. So there’s kind of a risk of bias and with getting stuck at whatever our current level of moral development is.”
You can see how worldview plays in. Shall AI accommodate a purely utilitarian worldview? Can the ethics that flow from a Christian worldview be found suitable for the broader public? Should a rationalist set of ethics be imposed on others globally? Is that not a sort of ethical imperialism or ethical colonialism? These issues are of serious concern not only for the theist but also for the progressive, because all worldviews tend to impose upon others. This is technocratic imperialism; I’ll shorten that to techperialism.
Unless we are open to some sort of Skynet scenario, it seems that the scope of what AI is allowed to do, and the ethical responsibility we give to any AI entity, must be limited. But if it is not, and we end up somewhere in between, where does moral responsibility lie?
Kris Hammond wrote:
The question of robotic ethics is making everyone tense. We worry about the machine’s lack of empathy, how calculating machines are going to know how to do the right thing, and even how we are going to judge and punish beings of steel and silicon.
Personally, I do not have such worries.
I am less concerned about robots doing wrong, and far more concerned about the moment they look at us and are appalled at how often we fail to do right. I am convinced that they will not only be smarter than we are, but have truer moral compasses, as well.
This approach is naive on its face. But the error, obvious to many, points to the real problem: is there ever a moral vacuum? Can a life-and-death or danger decision ever be made on a morally neutral basis? Should someone design a machine with that framework, I suspect the problem would become obvious very quickly.
Then there are Asimov’s Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
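As a thought experiment, the three laws might be sketched as a prioritized rule check. Everything below is hypothetical: the `Action` fields and the `permitted` function are invented for illustration, and reducing “harm” to a boolean already smuggles in the hardest moral judgments.

```python
# Toy, hypothetical encoding of Asimov's Three Laws as prioritized checks.
# A real system would need far richer models of harm, orders, and risk.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool          # would carrying this out injure a human?
    allows_human_harm: bool    # would it let a human come to harm through inaction?
    ordered_by_human: bool     # was this action commanded by a human?
    endangers_robot: bool      # does it threaten the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law: never injure a human, or permit harm through inaction.
    if action.harms_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders (already filtered by the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid self-destruction.
    return not action.endangers_robot

# A human order overrides self-preservation but never the First Law.
print(permitted(Action(harms_human=False, allows_human_harm=False,
                       ordered_by_human=True, endangers_robot=True)))   # True
print(permitted(Action(harms_human=True, allows_human_harm=False,
                       ordered_by_human=True, endangers_robot=False)))  # False
```

Even in this toy form, the priority ordering is itself a moral commitment made by the programmer, not by the machine.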
They sound good, but there are serious issues with them. The one of concern to me is that they are highly individualistic: they are written about one robot and the danger to one human being. They do not allow for the necessity of lethal defense, as in a defensive war or against a terrorist threat. Nor do they address dangerous situations in which death is unavoidable and a choice must still be made. As a result, the oversimplification in these principles is inadequate for anything but a novel or a movie.
We’ve seen that even the simplest AI systems have a “moral” code built in. At Facebook, Twitter, and Google it is driven by political philosophy. Though some might rightly argue that their code is not truly AI, it nonetheless reflects a moral choice that may be built into even the simplest of filtering solutions. The idea that morality can be, and is, built into code confronts roughly half of the world’s population on a daily basis.
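Even a trivial filter embeds such a choice. A minimal sketch, with a hypothetical blocklist and function (the placeholder terms stand in for whatever a platform deems unacceptable):

```python
# Toy content filter: choosing what goes in BLOCKED_TERMS is itself a
# moral and political judgment, made by whoever writes or configures it.
BLOCKED_TERMS = {"slur1", "slur2"}  # hypothetical placeholder entries

def allow_post(text: str) -> bool:
    # Normalize each word (lowercase, strip common punctuation) and
    # reject the post if any word appears on the blocklist.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return BLOCKED_TERMS.isdisjoint(words)

print(allow_post("Have a nice day"))  # True
print(allow_post("you are a slur1"))  # False
```

No machine learning is involved here, yet the code still enforces someone’s morality, which is precisely the point.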
So the final question seems not to be about the scope of what AI will be allowed to do but about what it will be programmed to do, and why. The moral responsibility lies with us as developers, designers, and deployers. We must be cautious that the materialism of the age does not reduce humanity to machines or raise machines to humanity, lest we convert the cultural and economic imperialism of the 19th century into a new economic-techno-imperialism in the 21st.
AI systems are currently being trained to imitate humans. This is creating big problems, because online people engage in many anti-social behaviors.
After much thought, I’ve decided that we should teach them ethics and morality by having them imitate God. God is good, just, loving, faithful, true, righteous, and holy, and He allows men to make their own choices.
Of course, most in AI development are ill-prepared for that discussion.