In their work “Markets without Limits”, Jason Brennan and Peter Jaworski argue that money does not change the morality of an act. So, for example, if Alfred gives Berta a Rolex watch as a gift, and that exchange is entirely within the limits of morality (i.e. the watch has not been stolen, etc.), then selling the watch does not change the morality of the act. Brennan and Jaworski use this argument against those who think that there are some things that cannot be sold. There are, to be sure. But the reason for this has nothing to do with the fact that the good is sold; it has to do with the morality of the act or the good itself. The simple rule or principle that Brennan and Jaworski establish is: “If an act X is morally okay as such, it remains so if act X is done for money”, or, in their words: “If you can do it for free, then you may do it for money” (Brennan & Jaworski, 2015, p. 10). If giving one’s kidney to someone else so that s/he can survive is okay, selling the kidney is okay, too. If it is permissible to give one’s child to someone else so that it can be raised by these people, it is also permissible to charge these people for giving them one’s child. It is not okay to give child porn as a gift (or to produce it or to possess it), so it is not okay to sell it. “But the problem with these markets isn’t the markets themselves—it’s that the items for sale should not be possessed, period” (p. 11).
Now, I am wondering if the spirit behind this argument – morality determines the permissibility of an act, not whether the act is a commodification of a good, a person, etc. – can be applied to other domains, too. To be more precise, I wonder if many questions surrounding the ethics of automation (e.g. roboethics, artificial intelligence) can be fruitfully answered by using a heuristic similar to Brennan and Jaworski’s.
To illustrate, let’s consider an excerpt from a blog post on “The Verge”, in which author Russell Brandom discusses the ethics of the Trolley Dilemma and its application to self-driving cars. In the Trolley Dilemma, a runaway trolley is headed towards a group of five people who happen to be on the trolley’s tracks and who will die if nothing is done. However, if you pull a switch, you divert the trolley onto another track, where one person will die instead of the five. If you pull the switch, you save five lives but sacrifice one. Reviewing a variety of alternative Trolley scenarios, Brandom concludes that this dilemma cannot and should not be used to program autonomous cars. He says:
“In a very literal sense, we would be surveying the public on who they would most like to see hit by a car, and then instructing cars that it’s less of a problem to hit those people. The test is premised on indifference to death. You’re driving the car and slowing down is clearly not an option, so from the outset we know that someone’s going to get it. The question is just how technology can allocate that indifference as efficiently as possible. […] If this is our best approximation of moral logic, maybe we’re not ready to automate these decisions at all.”
One might have many things to say about this quote, but I want to emphasize just one point: Brandom says that “horrifying moral choices”, such as the one between letting five people die and letting one die, cannot be automated. It is not so much, it seems, that Brandom thinks there is something morally wrong with, for instance, pulling the switch. Rather, his point is that choices of this tragic kind cannot be automated. One reason, according to Brandom, is that programming a Trolley Dilemma decision into a car amounts to teaching cars to be indifferent to death. This, Brandom adds in another paragraph, is not how morality works.
The choice between the five and the one is absolutely tragic, and most of us wish never to face such a choice. However, judging the situation in which a tragic decision must be made should be kept separate from judging the decision itself. My intuition, shared with many others, is that it is of course permissible, if not mandatory, to pull the switch. But what does this have to do with whether the decision is made by a human or by an autonomous car?
Is it the fact that a car acts unemotionally (indifference to death), whereas morality works on the basis of emotions? But does an act acquire a different normative status depending on the psychological outlook of the agent? Well, it seems it does. If Alfred kills Berta intentionally (intentions are part of a person’s psychological outlook) and gets caught by the police, he receives a different sentence than if he had killed Berta accidentally (murder vs. manslaughter). However, this example cannot be compared to the Trolley Dilemma and autonomous cars. To make an adequate comparison, we would need to ask whether killing someone without any emotion or intent at all differs normatively from killing someone out of anger, passion, or some other intentional state. And it seems that it does not: a psychopathic killer is normatively indistinguishable from an emotional killer. Moreover, the results in the murder-vs-manslaughter case and the psychopath-vs-passionate-killer case are identical: in both cases, we have to mourn the death of a person. And our mourning does not differ, nor does our judgment of the situation as pitiful, regrettable and sad.
This being said, I wonder whether we need to approach the ethics of automation in much the same way as we approach the morality of markets. Is there a principle, similar to Brennan and Jaworski’s, that says: “If an act is morally okay if done by a human being, it is also okay if it is done by a robot”?
If, for instance, saving five (and letting one die) is morally okay when done by a human driver, or if it is okay to teach our children to act accordingly, then it is okay to program autonomous cars accordingly. If it is okay to provide care for elderly people by paying uninterested, money-seeking caregivers, it is okay to do so by using a carebot. If making loan-related decisions (who should get a loan, and on what conditions?) is okay when made by humans analyzing a variety of data, it is okay when done by an algorithm. We might not like the situation itself, e.g. that people have to apply for loans, that not everybody can get one due to the scarcity of resources or because banks want to minimize risk, and that people might suffer from unequal chances. But if such decisions are still morally acceptable when made by humans, they can also be made by algorithms. If, by contrast, the practice of risk reduction in loan-related decisions were morally reprehensible, even prohibited, it would also be prohibited if carried out by a robot.
So, does this make sense? Are there cases where automation figures in the explanation of why a certain act is immoral? Can the arguments Brennan and Jaworski put forward against the potentially corrupting effect of markets be applied to criticisms of automation, too? Is Brandom’s argument an exception, or can it be found elsewhere, too? In other words, is it safe to say that people in the ethics of technology regularly argue in an untenable way – just as people in business ethics argue, untenably, that markets corrupt otherwise morally permissible action?