Would it be morally wrong to kill a sentient machine?

11 November 2016, 18:06

The question as you pose it is actually quite worrying. You could have asked: could there be any circumstances under which it would be morally okay to kill a sentient machine? If you were talking about humans, people would be shocked if you asked whether there was any reason why it would be morally wrong to kill one.

The mere fact of artificial origin doesn’t seem to have any relevance one way or the other. But I suppose there are exceptions, such as killing in self-defence, and I wouldn’t want to rule out every case of killing in war.

In general, though – yes, the threshold should be the same as for whether it’s ever okay to kill a human. We are sentient machines ourselves, though our mechanical makeup is certainly very different from that of current robots. It may not be that different from the sentient robots we’re imagining, however, because in order to be fully sentient, they’d have to have bodily constitutions much closer to ours than those of current robots.

The other problem here is that there’s a spectrum of possible robots, from the very, very crude ones we have today, to the synthetic biological creatures which may in future be created in the lab, and gestated very much in the way that human babies are – maybe in some kind of artificial womb, or even implanted in human wombs – and raised from infancy upwards through the stages that humans pass through as they mature. So as you think through these scenarios, you realise that the difference between them and us gets vanishingly small.

"There’s a spectrum of possible robots, from the very, very crude ones we have today, to the synthetic biological creatures which may in future be created in the lab"

I don’t think there’s any hard and fast threshold. I think there’s a whole set of issues which will have to be decided, to do with rights, and status in society. Even in the case of robots which aren’t configured to be sentient, there might be situations where we want to give them certain sorts of rights, for example property rights.

I thought the film ‘Bicentennial Man’ was a really good forum for this sort of discussion. It was based on a short story by Isaac Asimov, and there’s a point at which the central character, a robot played by Robin Williams, is granted the right to own property. The reason is that the robot is able to make exquisite objets d’art, which his owner sells for a lot of money; the owner doesn’t think he has the right to keep the money, so he sets up a fund for the robot, who then uses it to buy a house. It’s not very clear whether the robot is sentient, but it is highly intelligent – the question being whether rights to property may be independent of sentience. There may be points at which rights are given to robots even if they don’t have sentient feelings.

"I thought the film ‘Bicentennial Man’ was a really good forum for this sort of discussion"

There is a spectrum, and if we think about the two ends of it, that’s relatively easy: at one end they’re very much unlike us, and at the other they’re very, very much like us. Then there’s this massive area in the middle, where there are lots of different cases, and where it’s very difficult to know what to say about each of them. It’s a highly contested discussion which is going to be with us for decades to come.
