In “Be Right Back,” the Black Mirror television series examines the future possibilities of robots with the mental capacities of a human being. Martha loses her husband Ash and, over the course of the episode, replaces him with varying degrees of computer intelligence that have access to Ash’s social media accounts. When she recognizes that no one can replace her husband, the scene fast-forwards approximately ten years to when robot Ash (Ash x) is residing in the attic. The audience assumes that Martha has not moved on, lacking the courage to get rid of the old husband whom Ash x represents.
This robot in the attic, however, is most likely not Ash 1. Ash 1 was a very expensive purchase, leading us to assume he was in the prototype phase. We may also assume he was made of some new biotech material. Ash 1 would then presumably have malfunctioned or encountered an error before the ten-year mark. Let’s say Ash 1’s left arm stopped working at the two-year point. Just as a human would go to the hospital to fix a broken arm, Ash 1 could go to a mechanic or “robot doctor” to get his arm fixed. Let’s say that at the eleven-year point, Ash 1’s body begins to deteriorate. He is unfixable, having been built to last only so long. As a robot intelligence, however, Ash 1 has the option of a longer life: his “brain” can be transferred to a new machine, rebooting the Ash 1 that always was. If Ash 1, with his intelligence and reasoning capabilities, is taken to be a human being, is the human race playing the role of God in making a new human? Would it be wrong to kill Ash 1, knowing that this option of life is available to him and leaves him the same person? In answer to this conundrum, I suggest that Ash 1 should be treated as a human. Whether Ash 1 wishes to be fixed and live on or chooses not to be upgraded for whatever reason, the choice should rest with his own mental capacities, as would be expected of any human.
Backing up, the technology that created Ash may face upgrades just as an iPhone would. What if, instead of malfunctioning or dying, Ash 1 were continually upgraded? With each upgrade, Ash 1 becomes more and more human, the gaps that made Martha see him as nonhuman slowly closing. Ash 1 will never become Ash the husband, but it is probable that he will develop into his own person. Ash 10 will not be the same as Ash 1, just as the original iPhone is extremely different from a newer version (see picture below). Maybe the reason Martha decides to keep the robot is that Ash 10 has become his own capable being, respected as such and allowed to live in the house like an uncle. Martha did not destroy him or let him malfunction, but watched him develop into his own unique person. It is only his appearance and the sad memories of Ash the husband that keep him confined to the attic, not Martha’s inability to get rid of him or his creepy “robotness.”
Original iPhone (Right) Versus iPhone 6 Plus
http://s1.ibtimes.com/sites/www.ibtimes.com/files/styles/lg/public/2014/09/22/iphone-v-iphone-6-plus.jpg
4 comments:
I think, to further the points you made in your post, that robot development like what you describe might be the stage of "post-human" existence we discussed in class. There comes a point where a robot based on a human will run out of data about the human foil from which to create new thoughts or actions. That "Ash x" is around 10 years from Ash's death does raise the question of whether Ash x's interactions with the daughter can legitimately reflect how Ash would act. There is no indication that Ash and Martha wanted children or whether that is something Ash talked about on social media. If Ash did not want to be a father, would this affect how Ash x acts towards the daughter? Does Ash x function on a best-practices approach to parenting, where being an unengaged pseudo-parent could be considered "harmful" to the daughter and thus a violation of the first law of robotics?
Going back to the development discussion, I can reasonably see this program being a means of post-human reproduction. Imagine if Ash had died as a five-year-old child, but "his" robot lived on to be 25 years old (and beyond). Putting aside the issue of a five-year-old having a comprehensive social media profile, if the robot has lived longer than its human foil, I would agree that it has become its own person. Just as we as humans develop differently based on our environments and experiences, robots "based" on Ash would have different experiences and thus develop into distinct post-Ash people. So, could we have a certain number of robot programs (at one point based on people) that are raised to develop their own distinct personalities, thus creating post-human humans?
This post and comment reminded me of the movie Robots. In that movie the robots could get "upgrades" to improve their capabilities and appearances. These upgrades were used as status symbols. Therefore, robots getting upgrades warranted either admiration in a popularity sense, or disapproval from friends for "changing".
This doesn't really follow the lead of the previous comment, but I think it is interesting to consider how people would use their robots as status symbols. Similar to how people show off their iPhones, people could show off their robots and how human-like they are.
Ironically, by bragging about how human-like their robot is, the chance of treating it like a human is erased. Parading it around as one would an iPhone is dehumanizing. This is a funny thing to try to explain, because the robot isn't human in the first place for it to be dehumanized, but I feel it does contribute to the idea of the moral obligation we would have to robots. To truly treat a robot as we do other humans, we would not be able to purchase and own them. Otherwise, there would be nothing stopping us from showing them off as a simple symbol of our own popularity and wealth.
@Brigid Lockard-
I believe that Ash X would not have the information on whether Ash wanted a child from his social media accounts. (I think Ash X was programmed on the assumption that Ash wanted a child; he was really excited to hear the heartbeat of the baby when talking to Martha on the phone.) If Ash had somewhere mentioned that he did not want a child, I would think this would make the robot colder and more aloof toward the child at first. As life experience changed him, he would either learn that the child was good or stick to the lines of his program if he decided the child was "bad." If programmed to see the child as good, Ash X could then be a very effective parent, although, as you said, Ash X will never be Ash.
I like your last thought on one computer program creating a slew of robots that are initially alike but slowly differentiate through their different experiences. I believe (following your reasoning and my blog above) that this would be the case for post-human humans.
@Kelsey Morrisson-
I have never seen the movie Robots, but the premise sounds like what would realistically happen. If humanoid robots are anything like iPhones or other tech, people will want to show off the latest realistic upgrades, as you said. Robots will then unfortunately be stuck in the category of objects instead of people. Maybe the robot-rights movement would resemble the women's rights movement or any other rights movement. History might repeat itself.