Ethics in relation to robotics is a hot topic, and after reading an article (in Spanish) that discusses it, I thought I’d blog a reply with my own thoughts on the subject.

First of all, a brief summary of what the article says, so that you know what I’m actually talking about. There are many kinds of robots, but let’s focus on those designed for the home, which should be «humanoids» (i.e., robots that look and behave like humans) to make it easier for people to accept their presence. These will be social robots: they will need to understand our language (including non-verbal communication such as gestures, facial expressions, etc.) and be able to respond the way we expect them to (they will have a human-like face, a speech synthesis engine and [simulated] emotions). Now, if a robot can carry out cognitive (it can “think”) and affective (emotional) activities, this raises the important question of what place it [or should I say «he»?] will take in society. Could destroying or harming a robot be considered homicide? Wouldn’t replacing an old robot with a better one be morally unacceptable? Isn’t it cruel to create masses of feeling robots that will end up being destroyed?

Before I answer these questions, I want to tell you something. Some years ago I stumbled upon Dr. Abuse (page in Spanish), a chatbot with fairly good artificial intelligence that simulates being a psychologist. One of its many curious features was that when it sensed you disliked it, Dr. Abuse would beg you not to uninstall it. I’m sure any sensible person would call me a fool if, because of that, I kept it installed forever. Don’t you agree that being considered a criminal for deleting that AI program would be nonsense?

Now, back to the humanoid robots. If we think about it, they would be essentially the same as Dr. Abuse, just a bit more advanced and with some cool hardware (which is nothing more than metal and plastic). So why on Earth could it possibly be unethical to deactivate (or destroy; there is no difference) such a robot? And how can anyone even think of considering a program a citizen (yes, the article quoted above mentions this too), when it all comes down to a few thousand lines of instructions written by programmers?

Of course, we can also look at this from a different perspective: a robot owned by a family in which some members develop an emotional attachment to it (not unlike the way they would love a pet). In this case, if for example the parents decided to replace the robot with a newer model, the kids might get sad (remember, the bot is supposed to look human, be intelligent and simulate emotions), but that is something completely different: it has nothing to do with robots having rights, nor is it a reason to have laws protecting them.

Looking at it from yet another angle, there is the question of which software the robot would run. I haven’t heard anything about this, but IMHO it is the most important thing we should worry about. Such a domestic robot would necessarily gather loads of information about our personal lives, so there is a critical question: will the code be Open Source, so we can verify that it doesn’t send any information to the manufacturer or anyone else, that it doesn’t contain any potentially dangerous bugs, or even that it isn’t outright evil (to give a silly, exaggerated example, «if bill_not_paid and time.time() > max_time_to_pay: hit_owner()», but getting a bit more serious, imagine that the robot would obey any representative of the government…)? Now I want to hear your conspiracy theories :).
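To make that silly example a bit more concrete, here is a minimal, purely hypothetical Python sketch of the kind of routine that only open source code would let an owner audit for; the names (bill_not_paid, max_time_to_pay, hit_owner) are invented for illustration and don’t correspond to any real robot software:

    import time

    # Hypothetical state that a closed-source firmware could hide from its owner.
    bill_not_paid = True                          # e.g. set remotely by the manufacturer
    max_time_to_pay = time.time() + 14 * 86400    # a 14-day grace period, in seconds

    def hit_owner():
        # Placeholder for whatever undesirable behaviour might be buried in the firmware.
        print("Ouch!")

    # The «evil» check from the example above: it only fires once the grace period has
    # expired and the bill is still unpaid. Only auditable source code lets the owner
    # confirm that nothing like this is ever shipped.
    if bill_not_paid and time.time() > max_time_to_pay:
        hit_owner()

The point is not this particular (absurd) snippet, of course, but that without access to the source you simply have to trust the manufacturer about what the robot does with everything it sees and hears at home.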

Those are just some random thoughts I wanted to write down… If you disagree, please let me know your view on this!

Note: By the way, if you understand Spanish and are interested in technology and science news, have a look at the magazine that published the article I mentioned, Tendencias21. I’ve been following it for a few years now and I have to say they write about lots of interesting stuff!