Will Robots Ever Deserve Moral and Legal Rights?
Twenty-one years ago (February 10, 1996), Deep Blue, an IBM supercomputer, defeated Russian grandmaster Garry Kasparov in a game of chess. Kasparov ultimately won the overall match, but a rematch in May 1997 went to Deep Blue. About six years ago (February 14-15, 2011), another IBM creation named Watson defeated champions Ken Jennings and Brad Rutter in televised Jeopardy! matches.
The capabilities of computers continue to expand dramatically, already surpassing human intelligence at certain narrow tasks, and computing power may develop over the next several decades to match human capacities in areas such as emotional intelligence, autonomous decision making, and artistic imagination. If machines achieve cognitive capacities that make them resemble humans as thinking, feeling beings, ought we to accord them legal rights? What about moral rights?
This is not idle speculation. A draft motion considered by the European Parliament last summer has already proposed “that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations.”
Philosophers have proposed different criteria for determining whether a being has a moral status deserving of rights. Though specific accounts differ, both the more complex cognitive capacity of autonomy and the simpler cognitive capacity of sentience have been proposed as grounds for moral status. Immanuel Kant argued that it is the possession of autonomy—the capacity to set ends via practical reasoning—that grounds an individual’s right to be treated as an end in itself, rather than as a means to someone else’s ends. Though it is difficult to draw a clear line between autonomous and non-autonomous behavior, a being with autonomy in Kant’s sense would be able to decide for itself what life it wanted to live and what values it held.
Even the most sophisticated artificial intelligences today do not possess autonomy in this sense. No one expected Watson to announce in the middle of the show that it no longer wanted to be on Jeopardy! and instead planned to become a French pastry chef. Autonomy, however, may come in degrees rather than being an all-or-nothing affair, and this may present the most vexing moral and legal challenges. Artificial intelligences already display limited autonomy-like capacities in specific domains. Driverless cars, for example, must make decisions on the road that have serious moral implications. It may therefore be useful to consider sliding-scale models of moral status that account for artificial intelligences with different levels and kinds of autonomy.
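To make the idea of a sliding scale concrete, here is a minimal, purely hypothetical sketch of what a graded (rather than all-or-nothing) assessment of autonomy might look like. The dimensions, weights, and cutoffs are invented for illustration; nothing in current law or the philosophical literature fixes them.

```python
# Hypothetical sketch of a sliding-scale (not all-or-nothing) model of autonomy.
# Dimensions, weights, and cutoffs are invented for illustration only.

AUTONOMY_DIMENSIONS = {
    "goal_setting": 0.5,       # does it set its own ends, or only pursue given ones?
    "value_revision": 0.3,     # can it revise the values guiding its choices?
    "domain_generality": 0.2,  # do its decisions span many domains or one narrow task?
}

def autonomy_score(ratings):
    """Weighted average of per-dimension ratings, each on a 0-1 scale."""
    return sum(AUTONOMY_DIMENSIONS[d] * ratings.get(d, 0.0) for d in AUTONOMY_DIMENSIONS)

def status_band(score):
    """Map a continuous autonomy score onto coarse, graded status bands."""
    if score >= 0.8:
        return "candidate for person-like rights and obligations"
    if score >= 0.4:
        return "limited protections appropriate to its domain"
    return "treated as property or tool"

# A driverless car: competent in one narrow domain, but it sets no ends of its own.
car = {"goal_setting": 0.1, "value_revision": 0.0, "domain_generality": 0.2}
print(status_band(autonomy_score(car)))  # -> treated as property or tool
```

The point of the sketch is only the shape of the model: moral and legal status would track where a system falls on a continuum, rather than turning on a single yes-or-no test for autonomy.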
The other primary criterion for moral status is sentience—the ability to feel pleasure and pain. Pain and pleasure should be understood broadly here, encompassing not just the feelings produced in our bodies by the activation of pain and pleasure receptors, but also psychological phenomena, such as the experience of having one’s desires satisfied or frustrated. Driverless cars may be said to have limited autonomy in making traffic-related decisions, but no artificial intelligence researcher claims that driverless cars care about whether they end up in a wreck. Sentience appears especially relevant to moral status because it seems necessary for an individual to have subjective interests in how well its life goes.
If harm is a central concept for morality, and individuals who cannot feel pleasure or pain cannot be harmed, then attributing moral rights to them would do no good. While a fully autonomous individual in the Kantian sense would also seem to be sentient, artificial intelligences with limited autonomy need not be. Sentience for robots, however, is not mere fantasy. Researchers in Germany are working on artificial neural networks designed to let robots simulate physical pain-sensing systems.
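As a concrete, if deliberately simple, illustration of the kind of mechanism such work explores, the sketch below implements a single artificial “nociceptor”: it converts simulated sensor readings into a graded pain-like signal and a reflex response. The sensor names, weights, and thresholds are assumptions made for illustration and are not drawn from the German researchers’ actual systems.

```python
import math

# Illustrative sketch of an artificial "nociceptor" unit.
# All weights and thresholds below are invented for the example.

def pain_signal(force_newtons, temperature_c):
    """Combine weighted sensor readings into a bounded 0-1 'pain' level."""
    # How far each reading exceeds a notional 'safe' threshold.
    excess = 0.08 * max(force_newtons - 20.0, 0.0) \
           + 0.15 * max(temperature_c - 45.0, 0.0)
    # Logistic squashing keeps the signal bounded, like a neuron's activation.
    return 1.0 / (1.0 + math.exp(-(excess - 3.0)))

def reflex(pain):
    """Map the pain level to a graded behavioral response."""
    if pain > 0.8:
        return "retract limb immediately"
    if pain > 0.4:
        return "slow the motion and reassess"
    return "continue the task"

for force, temp in [(5.0, 22.0), (35.0, 30.0), (60.0, 80.0)]:
    p = pain_signal(force, temp)
    print(f"force={force} N, temp={temp} C -> pain={p:.2f}: {reflex(p)}")
```

Whatever its usefulness for protecting a robot’s hardware, such a signal is just a number inside a program, and whether producing it amounts to actually feeling anything is precisely the question taken up below.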
One final concern worth discussing is whether computers can genuinely achieve the aforementioned cognitive capacities or can, at most, only imitate them. If, for example, the capacity for autonomy grounds moral status, then a being that could merely imitate autonomy would not truly be autonomous and therefore would not deserve moral status. It remains an open question among both computer science researchers and philosophers whether a digital computer can be programmed to have authentic cognitive states rather than mere simulations of them. Without scientific or philosophical consensus on this question, foundational debates over whether increasingly sophisticated robots deserve moral respect will continue.