(Disclaimer: I couldn't get the videos to embed, but the links will open in another tab.)
Isaac Asimov's 3 Laws of Robotics are bound by the semantic definitions (the meanings) of the words found within them. Furthermore, answers to unresolved dilemmas and moral questions must be programmed into the robot, making it susceptible to human subjectivity. For these reasons, the 3 Laws of Robotics are more complex than they initially appear and should not be used as a means of controlling robotic behavior.
The 3 Laws of Robotics are stated as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If you take 'harm' or 'injury' to be not only physical but also psychological, then for a robot to do the utilitarian calculus needed to avoid injuring a human being, it would need a strong grasp of the nuances of human psychology. More importantly, it may be that no matter what the robot does, it will either injure a human being or allow one to come to harm through inaction.
Consider the classic Trolley Problem: a runaway trolley will kill five people unless a lever is pulled to divert it onto a side track, where it will kill one person instead. In this case, Isaac Asimov's First Law of Robotics cannot be used to form a decision. If the robot pulls the lever, it diverts the trolley and kills the single person, violating the clause, "A robot may not injure a human being." However, by not pulling the lever and allowing the five people to die, it violates the clause, "or, through inaction, allow a human being to come to harm."
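To make the deadlock concrete, here is a minimal sketch of a strict First Law check applied to the trolley scenario. The function name, option names, and harm counts are illustrative assumptions, not anything from Asimov or the post; the point is only that every available option violates one clause or the other, so the check returns no permissible action.

```python
# Hypothetical sketch: a strict reading of the First Law applied to the trolley scenario.
# Harm counts are illustrative; "harm" here is reduced to a simple number, which is
# exactly the kind of simplification the post argues a real robot could not rely on.

def violates_first_law(action):
    """True if the action breaks either clause of the First Law."""
    injures_by_acting = action["humans_injured_by_acting"] > 0
    allows_harm_by_inaction = action["humans_harmed_by_inaction"] > 0
    return injures_by_acting or allows_harm_by_inaction

trolley_options = [
    {"name": "pull lever",  "humans_injured_by_acting": 1, "humans_harmed_by_inaction": 0},
    {"name": "do not pull", "humans_injured_by_acting": 0, "humans_harmed_by_inaction": 5},
]

permissible = [a["name"] for a in trolley_options if not violates_first_law(a)]
print(permissible)  # [] -- neither option is permissible, so the First Law gives no guidance
```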
While extreme scenarios like this one would not be a day-to-day occurrence (I hope), lesser scenarios would still arise in which this dilemma occurs. Still, let's give the First Law of Robotics the benefit of the doubt and divide it semantically.
Let's take the First Law of Robotics and split it into an 'either-or' statement:
1. Either, "A robot may not injure a human being,"
2. Or, "through inaction, allow a human being to come to harm."
This gives a robot a choice, thereby dodging the Trolley Problem. Yet it creates a new problem: for a robot to make a decision in this case, it would need some way of evaluating which side of the 'either-or' statement to take, which inevitably requires human assistance. This is where the Second Law of Robotics comes into play: the 'evaluation process' could be an order given to the robot by a person under the Second Law that, rather than conflicting with the First Law, makes it workable instead (see the sketch below). The Third Law of Robotics can likewise function because of this 'either-or' relationship. Nonetheless, this approach is flawed, but for an entirely different reason.
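Here is a hedged sketch of that 'either-or' reading with a human-supplied evaluation rule standing in for the Second Law order. The "minimize total harm" rule, the function names, and the harm counts are all assumptions for illustration; the post's point is precisely that some human must choose whatever rule fills this role, importing their own subjectivity.

```python
# Hypothetical sketch of the 'either-or' reading: if no option satisfies both clauses
# of the First Law, fall back to a rule ordered by a human under the Second Law.
# The rule used here (minimize total harm) is one arbitrary choice among many.

def total_harm(option):
    return option["humans_injured_by_acting"] + option["humans_harmed_by_inaction"]

def choose_action(options, human_order):
    # Either-or reading of the First Law: prefer any option that violates neither clause.
    blameless = [o for o in options if total_harm(o) == 0]
    if blameless:
        return blameless[0]
    # Otherwise the First Law alone cannot decide; defer to the human-ordered rule
    # (Second Law), which ranks the options by whatever metric the human chose.
    return min(options, key=human_order)

trolley_options = [
    {"name": "pull lever",  "humans_injured_by_acting": 1, "humans_harmed_by_inaction": 0},
    {"name": "do not pull", "humans_injured_by_acting": 0, "humans_harmed_by_inaction": 5},
]

print(choose_action(trolley_options, human_order=total_harm)["name"])  # "pull lever"
```

Swapping in a different human_order (say, one weighting psychological harm, or refusing to count lives at all) changes the robot's decision entirely, which is the subjectivity the next paragraph turns to.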
The robot needs an internal understanding of what 'harm', 'injury', or 'humanity' is. Semantically, these are difficult to define and span the entire range of Ethics and Moral Theory.
For these reasons, the 3 Laws of Robotics are more complicated than they seem, belying their apparent simplicity. Therefore, they should not be used as a means of controlling robotic behavior.
6 comments:
I like how you used the trolley problem as an example. I never would've thought of that. You also make very good explanations within your blog about the 3 Laws of Robotics, which is good for your readers. Also, I like how you mentioned that robots not only need to understand what physical harm is to a human, but also psychological.
I really liked your post because it challenges some of the stances I took in the class symposium. Honestly, it's making me rethink my views entirely. The Computerphile video was a great explanation of why the laws wouldn't work. To us humans, since we know the general associations of 'human' and 'harm', we would think a robot could interpret that same information; however, it wouldn't be able to. It would be like explaining something to a child during their 'why' phase. The robot would be an entirely blank slate, and the video is right: we would have to first solve all our moral and ethical issues in order to be able to start coding the robot. This makes me question whether or not we should even make rules for our robots. Should they be held to the same standards as humans, the ones humans break every day? I definitely agree with Dr. Johnson that we need to seriously think about our A.I. before we develop it even more and have something actually go wrong.
I really appreciated the chart that went along with your argument. It added a level of explanation and humor to your post.
I understand your stance that human emotion and morality will be difficult to program into a robot. I think it will eventually be achievable through upgrades, but for the time period in between, the 3 Laws of Robotics could not be used for robots. I agree with Megan Morrison's comment that we need to consider our own morality before we try to make mentally capable beings, but at the rate technology is progressing, we will probably be relying on those upgrades.
Thank you @Megan Morrison for your insight on making rules for future A.I. Your blank slate argument completely hit the nail on the head for this issue and was the primary reason why I disagreed with Isaac Asimov's Three Laws of Robotics to begin with. However, the Computerphile video explained my suspicions better than I ever could! We definitely need to put more thought into technology as it continues to advance.
@Allison Sorette I'm in agreement with you about the possibility of Morality and Ethics being programmed into the robot... eventually. Even still, my layman's thoughts on the topic are that perhaps Ethics and Moral Theory will never be complete. Does that mean a perfect robotic algorithm is impossible and forever subjective to the person who programmed it? Questions like these keep me up at night (not really).
This reminded me of a video of a human messing around with a robot to observe its actions, but someone dubbed a voice over to the robot to express his/her opinion, even though robots do not have emotions (yet). Very well done Ian, nice.
https://www.bing.com/videos/search?q=robot+human+box+voice+over&view=detail&mid=BB528DCEDEBAB1B5BBE1BB528DCEDEBAB1B5BBE1&FORM=VIRE