Without further ado, let’s assume the AI has all information available and imagine it debating the world’s foremost human intellectual. The AI will produce a flawless argument that takes all available information into account in a superhuman fashion no human intellectual could ever attain.
Its rational reasoning is far superior. Recall AlphaGo’s 2016 victory over world champion Lee Sedol. In 2017, a new version of the AI surpassed the old one by a huge margin, and people like to believe it possesses a sense of inventiveness or creativity. Can we rely on such superior reasoning for ethical problems as well?
What if we surrender our autonomy to the AI when it comes to ethical decision making (for example, concerning preventive air strikes, euthanasia, abortion, animal testing, and all the other moral issues of the day)? The (human) imperative to do so is that the AI demonstrably delivers the better argument virtually all the time. So it would be a-rational (a limitation of our capacity for rational decision making) to deny the AI the authority to dictate our ethical decisions to us.
Hold on a minute. Isn’t this ethical autonomy precisely the irrational factor that we must claim for ourselves, for the sake of our human identity? Isn’t it the root of our morality, one that can never (and should never) be rationalized? Isn’t this the point where Enlightenment could revert to its own opposite, as Adorno and Horkheimer warned? Is this the limit of rational ethics?
So if we follow the AI, we surrender our autonomy and hence the essence of our ethical behavior; we would merely be executing, uncritically, what we believe to be more rational and thus more ethical. The AI would be a religious system with perfect priests and no need for exegesis of the sacred text. Moral truth would be decreed by the AI, and disobeying it would be the original sin of irrationality. If we don’t follow the AI, we would be intentionally defying rationality, since we have already established the superior rationality of its arguments. One way or the other, our ethical discourse is compromised.
If we don’t want to give up our claim to rational ethical agency, we will have to think hard about artificial minds far superior to our own. That is the moral dilemma of AI.