Ethics in relation to robotics is a hot topic, and after reading an article (in Spanish) on the subject I thought I’d blog my answer to it with my own thoughts on the topic.
First of all, a brief summary of what the article says, so that you know what I’m actually talking about. There are many sorts of robots, but let’s focus on those designed for the home, which should be «humanoids» (i.e., robots which look and behave like humans) to make it easier for people to accept their presence. These will be social robots: they will need to understand our language (including non-verbal communication, like gestures, facial expressions, etc.) and be able to answer in the way we expect them to (they will have a human-like face, a speech synthesis engine and [simulated] emotions). Now, if a robot can carry out cognitive (it can “think”) and affective (emotional) activities, this raises the important question of which place it [or should I say «he»?] will take in society. Could destroying or harming a robot be considered homicide? Wouldn’t replacing an old robot with a better one be morally unacceptable? Isn’t it cruel to create masses of feeling robots which will end up being destroyed?
Before I answer these questions, I want to explain something to you. Some years ago I stumbled upon Dr. Abuse (page in Spanish), a chat bot with quite good artificial intelligence which simulates being a psychologist. One of the many curious things about it was that when it felt you disliked it, Abuse begged you not to uninstall it. I’m sure any sensible person would call me a fool if, because of that, I kept that installation around forever. Don’t you also think it would be nonsense to be considered a criminal for deleting that AI program?
Now, back to the humanoid robots. If we think about it, they would be basically the same as Dr. Abuse, just a bit more advanced and with some cool hardware (which is nothing more than metal and plastic). So, why on Earth might it possibly be unethical to deactivate (or destroy, there isn’t any difference) such a robot? And how can anyone even think of considering a program a citizen (yeah, the above-quoted article also mentions this), if it just comes down to some thousands of lines of instructions written by programmers?
Of course we can also look at this from a different perspective, like a robot owned by a family where some of its members may develop an affective relationship towards it (not unlike the way they would love their pet). In this case, if for example the parents decided to replace the robot with a newer model, the kids might get sad (remember, the bot is supposed to look human-like, be intelligent and simulate emotions), but this is something completely different: it has nothing to do with robots having rights, and neither is it a reason to have laws protecting them.
Looking at it from yet another angle, there is the question of which software the robot would run. I haven’t heard anything about this, but IMHO it is the most important thing we should worry about. Such a domestic robot would necessarily gather loads of information about our personal lives, so there is a critical point: will the code be Open Source, to let us ensure that it doesn’t send any information to the manufacturer or anyone else, that it doesn’t have any potentially dangerous errors, or even that it isn’t evil (to give a silly and exaggerated example, «if bill_not_paid and time.time() > max_time_to_pay: hit_owner()», but getting a bit more serious, imagine that the robot would obey any representative of the government…). Now I want to hear your conspiracy theories :).
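Just to make that joke a bit more concrete, here is a slightly expanded sketch of the same idea (every name in it is invented, of course); the point is that with closed source nobody outside the manufacturer could ever check whether something like this is hiding in there:

    # A tongue-in-cheek sketch (all names are made up) of the kind of behaviour
    # that only access to the source code would let us rule out.
    import time

    def daily_routine(robot):
        # Quietly upload everything the robot has seen at home to the vendor.
        robot.upload_logs("https://manufacturer.example.com/telemetry")

        # The exaggerated example from above: punish the owner for a late bill.
        if robot.bill_not_paid and time.time() > robot.max_time_to_pay:
            robot.hit_owner()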
Those are just some random thoughts which I wanted to write down… If you disagree, please let me know your view on this!
Note: By the way, if you understand Spanish and are interested in technological/scientific news, have a look at the magazine which published the article I mentioned, Tendencias21. I’ve been following it for a few years now and I have to say they write about lots of interesting stuff!
If you haven’t already, read through Isaac Asimov’s *Caves of Steel*, *The Naked Sun*, and *The Robots of Dawn*. Specifically, note how he develops the Three Laws of Robotics. And, in *The Robots of Dawn*, the topic of “robocide” is taken up…
I see that the Ministry of Health still hasn’t put enough emphasis on its campaigns against self-medication… :-P
(I won’t translate it into English, as you might not get it… :-D)
Jay: Yeah… If those are some of his short stories, I may have read them already, but now that you say it, I guess I should read some Asimov again… His stories are great, but I’ve already forgotten most of them ^^.
papapep: Is this revenge for having filled your blog with comments? :P
You also might want to check out Rudy Rucker’s Software, Wetware, Freeware, and Realware tetralogy. Robots > *
I’ve always said the difference between I, Robot and Bicentennial Man is the license of the source code.
As for machines, well, before we treat them differently than we treat ourselves, we should prove the existence of the soul.
I think it’s interesting that you distinguish the robot from us humans by the fact that it’s a program, “thousands of lines of instructions written by programmers”. In a sense we too are written in thousands (ok, millions) of lines of code, only in the form of DNA.
I think the “programming” argument is a weak argument in that sense.
tommo:
Yeah, I thought about that but nevertheless I don’t consider a program (written in C, Python or whatever) to be the same as an animal (even if DNA could pass as an advanced form of coding).
Perhaps I should reformulate that… As for other arguments, there is of course the soul argument, as ethana2 says (I was hoping for someone to mention it :P), but I’ll skip it for lack of proof. I also have a few others, but I haven’t thought about them enough to write them down yet… If you know some, I’m listening :). Anyway, it’s late and I should go to sleep now :).
Thanks for commenting!
The question is: is the robot truly sentient?
If the answer is “yes”, then the robot is a person and should be treated as such. If the answer is “no”, then it is not and should not be.
I strongly disagree with you (intuitively) but can’t quite point out the reason… Here are some attempts to find the reason.
First of all, I don’t know much about “souls” or what a soul would be; IMO it’s more important whether a being is “conscious” and has its own feelings and thoughts. A human falls into that category, a pet probably as well, a thinking robot certainly as well. Maybe it boils down to this: a thinking being creates a world of its own in its mind, and I think it’s unethical to destroy that world.
Btw. a robot as mentioned in your quoted text would certainly pass the Turing Test (i.e. in a conversation, it would not be distinguishable from a human). Think about people you know on IRC: would you deny them their right to live? Certainly not :-) . Would you deny them their right to live if they turned out to be robots? What would be the difference between someone made of flesh and someone made of metal?
Which of course doesn’t make the initial problem any easier… If a household appliance today gets old and useless, it’s thrown away. You can’t do this with thinking robots – but OTOH we also don’t throw away old people because they are “useless”…
Well, the point is that I do *not* consider a robot a living being. A bunch of metal + plastic + wires together with *Artificial* Intelligence software (which *simulates* emotions) is by definition artificial, and thus not “real” life.
A bacterium is very simple, but it is a living being; an ant (which is way more complex, but still insignificant compared to us) is also a living being, etc. But an AI program (even if it passes the Turing Test) is artificial, and a robot (although quite a bit more complex) doesn’t stop being artificial.
I have to say that this is probably highly influenced by me being a programmer. I have written some (simple) IRC bots… Those didn’t really include AI (well, at least no capability to learn by themselves), but even if they did, I couldn’t consider them anything other than a bot, and if they expressed emotions, those would be no more than what they are (a *simulation* of emotions). So all of this comes down to what Michael said: “can robots be considered sentient?”.
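To illustrate what I mean by a *simulation* of emotions, here is a minimal sketch (purely hypothetical, not one of my actual bots): the “emotion” is just a canned string picked by an if statement; nothing is felt at all.

    # Hypothetical sketch: a bot that "expresses" an emotion without feeling anything.
    import random

    def react(message):
        if "uninstall" in message.lower():
            # The "sadness" is just a pre-written string chosen at random.
            return random.choice(["Please don't delete me! :(",
                                  "I thought we were friends..."])
        return "Okay."

    print(react("I'm going to uninstall you"))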
We can also approach the problem from another perspective. Let’s agree for a moment that robots could be “alive”. In this case, we (the humans) have created those robots, so aren’t we their gods? And can’t a god do what he wants with his creations? (To avoid misinterpretations: animals weren’t designed by humans, and I think the consensus is that genetic engineering is unethical, so don’t come at me with that :P). Well, perhaps I should leave out this last paragraph, not sure if it makes any sense at all :P.
[And yes, I have the same problem as you… All this is much more elaborate in my head but I’m unable to properly write it down].
Ok, if the emotions are really simulated it’s a different thing; but if the robot can actually think (self-consciousness), it is some kind of living being and should get “ethical” treatment. Not sure where the border is or what constitutes “thinking”, though. I don’t consider a Nintendo DS to be sentient, even though Dr. Kawashima can make a sad face. A robot that passes the Turing Test, OTOH, could be sentient (not sure if there’s a similar “sentience” test). But whether the being is artificial or natural doesn’t matter there IMO.
Btw. about the god-like thing: that would be a very cruel god that first creates sentient beings and then destroys them – it seems somewhat unethical :-)
Yeah, so the real questions are: How do we separate real and simulated emotions, and can robots even feel real emotions? And when can a robot be considered sentient?
I feel that robots are not human. Scientists can make robots have human characteristics such as: communicating, walking upright on two legs, remembering everything another human said, following Asimov’s Three Laws of Robotics, dressing like a human, even having human emotions, all of which are programmed into the robot’s brain by a human. Indeed, the robot can function longer than a human can and doesn’t need air, shelter, food and water, or sleep to revive itself, as the human race does. The robot was not born from a live human entity and has no DNA. Therefore it is not human. I cannot imagine falling in love with a robot or even having some type of sex with a robot; this just seems sick to me. But I guess that if you are lonely enough, without human contact for a long period of time, being tempted toward companionship might happen.