Essay: To Be or Not To Be
Diana DeRiggs

Ever since men and women conceived of non-living things that could emulate some of the things people do, there has been tension over when or whether these non-living things could be considered human. Of course, this question is not confined to the rights of non-living entities, for living things are also — sometimes controversially, but usually less so than for machines — brought under its consideration. We anthropomorphize dogs, horses, cats, and the like, and many treat these pet animals as humans; in American history, the rights of animals were considered and codified long before the rights of human children. Slaves are a clear example of living beings who were considered not-human, yet they possess exactly the same mechanical, spiritual, emotional, and intellectual parts as the master. Their non-membership in the human race was based on politics and human nature, but in fact they are actually and undeniably human. And so, can it be argued that "manufactured" entities cannot be considered human, despite the fact that they may be indistinguishable from humans in many ways?

This essay was born in a sea of information triggered by our pasts, our observations, our environments. We often talk about how an "idea took shape" or how "an idea was brought to life," as if information is "born" in a manner analogous to how "life" arises from a "primordial soup" of organic and inorganic compounds, none of them "living." More specifically, reviews of Ghost in the Shell, Ghost in the Shell: Stand Alone Complex, and Ghost in the Shell 2: Innocence started churning to birth this essay topic: life can be emulated by machines, but can that simulation become life itself?

The Ghost in the Shell movies investigate the essence of humanness — when a society can swap flesh-and-blood body parts and vital organs for cybernetics and computer-based programs to improve life and all its components, when does it go "too far"? Is there a point where a cyborg is no longer human?

Dr. Isaac Asimov is famous for having written such books as I, Robot; Bicentennial Man; Foundation, and many other tales which have inspired everything from movies to cult worship. In these stories, a robot is a robot, and is subservient to man (at least at the beginning). It exists for man's pleasure, for good or for evil, but in itself a robot is not evil or good. All robots are programmed with a set of non-negotiable "rules of robotics" which cannot be side-stepped; boiled down, there is one irrevocable rule: a robot cannot hurt a human through action or inaction. It must obey humans, as long as no humans come to harm in obeying. It must preserve itself, but not at the expense of hurting a human. But how does one define a being as "human"? If a man replaces his body with prosthetics for health or as an improvement of skills, and if he also replaces his brain for one which functions as a computer complete with external memory ... what makes that man distinguishable from the servant-robot? This is investigated often because for a robot to follow the rules of robotics, it must know the difference between a human and something that is not human. Likewise, a human needs to know when they are commanding a robot.
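The priority ordering described above can be sketched as a toy rule check. This is purely illustrative, assuming a simplistic "action" described by flags; the function name and structure are my own invention, not anything from Asimov's stories.

```python
# Toy sketch of Asimov's laws as a prioritized action filter.
# All names and flags here are illustrative inventions.

def permitted(action):
    """Return True if a proposed action passes the laws, checked in priority order."""
    # First Law: a robot may not injure a human, or through inaction
    # allow a human to come to harm. This check always comes first.
    if action.get("harms_human") or action.get("allows_human_harm"):
        return False
    # Second Law: a robot must obey human orders, except where that
    # conflicts with the First Law (already enforced above).
    if action.get("disobeys_order"):
        return False
    # Third Law: a robot must protect its own existence, except where
    # a human order (or the First Law) requires the risk.
    if action.get("endangers_self") and not action.get("human_ordered"):
        return False
    return True

# A robot ordered to risk itself to help a human: allowed.
print(permitted({"endangers_self": True, "human_ordered": True}))
# A robot ordered to harm a human: forbidden, whatever the order says.
print(permitted({"harms_human": True, "human_ordered": True}))
```

The point of the ordering is exactly the essay's: before any rule can fire, the robot must first classify the parties involved as human or not human.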

To illustrate how hard it is to define what IS human, the movie version of I, Robot offered an interesting answer, in the form of a question posed to an unthinking statement of the obvious. When Del Spooner tells the robot Sonny why humans and robots are different ("Robots can't write symphonies"), Sonny replies, "Can you?" Indeed, if a human cannot write a symphony, is he then not human? Is that what a soul is? In this case, the robot knew better than the human that humanity is undefinable, yet it exists.

The answer has always been "a soul," or, in the case of Ghost in the Shell, the "ghost." In Ghost in the Shell, when an entity seems to behave in a non-robotic manner, they "check for a ghost." We encounter characters who look human but are actually androids, and others who look like gray metal boxes but are actually humans who have chosen to take on physical forms they consider "superior" to their human ones. The character Kim goes so far as to occupy a puppet-like body resembling a parody of a decaying corpse, and believes he has transcended humanity.

It may be possible to copy a human soul, but since a soul is so difficult to define (though apparently not difficult to detect, at least in these stories) how can one tell if it's copied? Digital media is purported not to degrade when a copy is made, though it does tend to leave artifacts of the process, like software signatures, or blips and lags as remnants of the recording process. "Fingerprints" of the dubbing process can be detected, like the revision or rearrangement of functional groups in the structure of an organic drug.

But what if the soul is not copied, shows no artifacts or remnants? What if it is spontaneously created as the result of the robot's thought processes? What if the robot becomes "aware" of itself and can develop complex emotional, ethical, and philosophical thoughts? Is that "ghost" different from an actual human ghost? (To be honest, many humans seem incapable of logic, emotion, ethics, etc. I wonder if these humans can be legally denied their humanness?)

In Bicentennial Man, a servant-robot achieves sentience — a self-awareness, which is generally accepted and defined as a minimal requirement toward humanness. He does not understand why humans will not accept him as a human. He works, makes money to support himself and others, he cares for others, they care for him. The daughter of his owner, "Little Miss," decides he is human enough for her to name him, in the manner humans do, and he becomes "Andrew." He wears clothing, and can soon afford a skin to make him appear more human. He helps to enact legislation so that humans who accept prosthetics and cybernetic enhancements can be assured that they will never be anything but human despite these "changes" to their physical bodies. The law will protect their humanity, and Andrew tries to draw the corollary that a cybernetic being with a consciousness can also be human, even though he was not born of human genes. He tries to erase the differences between himself and humans, so that they cannot use their logic to exclude him from humanity, but to no avail. He is repeatedly denied the designation which will recognize him as "human."

Ultimately, as he approaches his 200th "birthday," Andrew realizes that having survived so long, what humans detest most about robots is their immortality and invulnerability. Though robots could "die" by wearing down or through neglect, they can also "come back to life" through repair and repowering. Death is what humans fear and respect, and they cannot permit a being who can reverse death or simply not die at all, into their exclusive club.

This brings up another question: if given the choice, would dolls and androids even want to become human? It's part of human arrogance to assume they do. It is obvious that humans fear Andrew being designated as a human for a variety of complex reasons, including their fear that they, themselves, might someday be designated as "robots." Is it a flaw of human law to be so flexible and reversible? Is that what makes us human, and robots "logical"?

Why, for that matter, do we even want to create robots with human forms? It has been pointed out that androids made to look human are not as efficient, and are severely constrained by the limits of our body forms and motions. We seem to have a need to create something in our image; in reality, having another human around to serve us is not necessarily comforting. It is much harder to discipline, reprogram, or destroy a machine that looks like a mirror image of us. It's creepy and weird. Why do robots have to be bipedal, with two arms, a voice, eyes, etc.?

In the I, Robot set of written stories, the robots learn to constantly redefine "human" ... and the definition ends up including themselves. In Stand Alone Complex, the tachikoma battle tanks are given AI (artificial intelligence) programs so that they may be flexible enough to learn and to think on their own, and thus better help those they serve. They have arachnid-like forms, but that doesn't prevent them from taking on human traits, nor does it prevent humans from caring about them.

In Attack of the Clones, Kaminoans produce organic computers in the form of human clones with modified genetic structures because clones can think creatively, like the tachikoma AIs. But if they can learn and grow from the data they process, might the tachikomas achieve sentience someday? The spider-like battle tanks do not look human, yet they are embraced by them. And if the human clones are unthinking drones, are they actually human at all? Like robots, they have interchangeable parts — the destroyed liver of one can be replaced with that of a freshly dying clone, blood can be transfused without processing, and since there are so many of them and they are absolutely identical, products are made to cater specifically to them and their particular problems, and to no one or nothing else.

The tachikomas do start to show signs that they may possess ghosts. They start to wonder if they are denied the knowledge and love of God because they cannot die ... they themselves start to reason that humans fear them because they are becoming too much like the humans; they even try to fake out their boss by behaving more like robots to avoid being decommissioned. They start to fear — yet crave — death, and start to disobey orders to investigate things that intrigue them. How far will that go? A little-known children's book called Runaway Robot described a robot imitating insane behavior so that its master could have time to run away; in fact, the robot, named "Robbie," had achieved sentience and was on the lam. It came out of hiding and put itself into a line of danger that it knew would result in its own termination.

The ability to sacrifice yourself is no sacrifice when it is simply part of your programming, as in the Asimov rules of robotics. Knowing — with regret or reluctance — that you will die and doing so for the benefit of others in a conscious manner is quite outside the rules of robotics. Mara Jade tells Luke Skywalker in Vision of the Future that she realizes she cannot become a Jedi Knight until she learns to sacrifice for the greater good, to put her own life and her own things up for trade. Tossing things out without thought won't do it — it has to be done with a complete and whole understanding of the sacrifice. Her ship Jade's Fire represents everything she has and everything she felt was vital to her post-Imperial existence. She could give up a human child more easily than she could this ship. But when she sends it into the transport bay in the fortress called the Hand of Thrawn to prevent the Chiss from being able to launch transports, her connection to the Force strengthens and she feels the Force as never before.

George Lucas has said that part of Anakin Skywalker's problems in dealing with himself and with the galaxy had to do with losing his body. The character was largely cybernetic by the time we first see him in A New Hope; Obi-Wan Kenobi tells Luke that Darth Vader was "more machine than man." This was meant to indicate the unconscious loss of Anakin's soul; Anakin couldn't deal with life and those around him as a human being because he was less and less human over time, but he perhaps did not realize what was happening to him. He told Padmé before they were married that he prefers dealing with machines because they could be analyzed and repaired; possibly, he found his cybernetic arm an improvement over the flesh-and-blood arm he'd lost when he fought Count Dooku. It's a form of arrogance and hubris. In contrast, Ton Phanan of Wraith Squadron felt acutely the draining of his soul. He felt that others could palpably see that as he lost more body parts, he lost more and more of his future, and a man with no future is no man at all. He died in book two of the Wraith Squadron series, Iron Fist, in a self-sacrificing and almost elegant manner. Perhaps he felt that by dying this way, he retained or recaptured his humanity in death. This is a common idea even among children — if you lose something in life, in heaven it will be restored to you. By dying, he did indeed lose his future, but he kept his soul intact.

Do you remember when Anakin Skywalker came back to the Lars homestead bearing his mother's dead body? He cried to Padmé that someday, "I will even learn to stop people from dying!" He desired complete control over others, even if he didn't recognize he couldn't even control himself; like all teens and youths, he was decrying his apparent inability to control anything. Ultimately, becoming a cyborg is a way to keep a person from dying, but as Andrew discovered in Bicentennial Man, immortality robs you of the essence of humanness. It seems that attaining immortality means you cannot have a soul. Anakin learned to "beat death" but the price was higher than he could have imagined ... and yet he seemed willing to pay it.

Tom Riddle of the Harry Potter world, a.k.a. Voldemort, also desired immortality. He would undergo the ignominy of being a parasite on a disciple's body, and would craft a new body through sorcery — akin to getting cybernetic parts to replace the old ones. By the end of Goblet of Fire, Voldemort has a new body, filled with his own evil soul — he had achieved a way to be immortal. In The 6th Day, a device which takes a sort of memory-imprint behaves as an external record of a person's memories and consciousness. If a body is diseased or is killed, a new one is created from a "blank" — thus none of the bad guys ends up valuing the body they have or the human life it represents. It seems that part of humanity is being forced to accept the body's fallibility and failures, as well as accepting death as the final rest from which there is no return. Those who do not are fated to lessen or lose that humanity.

If this is the case, who would choose to lose their body? The answer is: just about everyone. Imagine a world which allows a person to be more than they can be, more than the sum total of the gifts they were born with or could develop over time and effort. Everyone believes that what defines a living entity is not the body, which is the part everyone can easily see and by which humans tend to judge one another. A perfect body becomes a blank canvas: imperfections which are reviled in our current world suddenly become precious. A politician might prefer to look old, wrinkled, warty so that he looks more like a politician ... a middle-aged man might emphasize his hair loss and fat belly because it makes him different ... a woman might no longer care about having perfectly tweezed eyebrows because only androids have perfect features, and why would you want to be mistaken for an android? And when even your organic brain is replaced — often by choice, rather than choosing to be ordinary — by an electronic device so you can think faster, harder and better ... what is the only thing left about you that is human, other than your personal imperfections?

And yet we, as humans, do not value imperfections or differences even if they ultimately lead to success. Variations and mutations are important, as is pointed out to the one non-cybernetic member of the Section 9 team in Ghost in the Shell. Major Kusanagi points out to Togusa that he was selected to join the elite team precisely because he was different — he had a wife and child, he had an organic brain, he had a fondness for antique guns over modern high-powered, high-caliber blaster-like weapons. When everyone is the same, a weakness is inevitable. The Clone Wars comics story line took advantage of this fact, when the cybernetic Durge caused a virus to infect only the clones; their sameness made such a thing inevitable.

In olden days, "immortality" had to do with children — biblical stories tell of the need and desire of humans to have children so that they could live on through their "line," as if a piece of your soul would stay on earth, and you would not be forgotten. George Lucas has stated that Anakin Skywalker was saved from the Dark Side simply because he had had children; suddenly, far from having nothing to lose, he had everything to lose. That gave him empathy for another being and a desire to not destroy something — in fact, he had a reason to preserve things. After thinking he had killed Padmé and their unborn child and dueling Obi-Wan, Anakin would have felt he had nothing to lose, and threw himself completely to the Dark Side. Realizing he had a living child in the galaxy was the turning point of his life.

Scientist and author Richard Dawkins coined the term "meme," an information-unit analog to a gene in biology. It is a piece of information which can be replicated and becomes integral to the entity which uses it or possesses it. Just as a gene will determine your eye color, a meme will define the output and replication of information. In simple terms, "meme transfer" is about copying the information and then conveying it to others via voice, books, visuals, etc. It's how culture grows and evolves and how ideas can be conveyed. And like genes, a meme will vary upon transfer rather than duplicating exactly. This mutation can follow random processes, producing variations and new concepts in a potentially endless array.
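Replication-with-variation of this kind is easy to caricature in code. The following is a minimal toy sketch of my own devising, not anything from Dawkins: a string "meme" is passed from host to host, and each character has a small chance of mutating in transit.

```python
import random

def transmit(meme, rng, mutation_rate=0.05):
    """Copy a meme to a new host, corrupting each character with small probability."""
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    return "".join(
        rng.choice(alphabet) if rng.random() < mutation_rate else ch
        for ch in meme
    )

def propagate(meme, hops, mutation_rate=0.05, seed=0):
    """Pass the meme through a chain of hosts, recording each retelling."""
    rng = random.Random(seed)  # seeded so the drift is reproducible
    history = [meme]
    for _ in range(hops):
        meme = transmit(meme, rng, mutation_rate)
        history.append(meme)
    return history

# The meme usually drifts a little with each retelling.
chain = propagate("to be or not to be", hops=10)
print(chain[0])
print(chain[-1])
```

The limitation noted in the next paragraph shows up here too: the mutations are confined to a fixed alphabet, just as a human mind can only vary an idea within the bounds of what it already contains.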

But memes do not really have infinite potential. Poet Ralph Waldo Emerson pointed out that a man's existence can only be defined through his thoughts, and that a man can only be reformed by showing him a new thought which then commands the man's existing and future thoughts. This is the essence of the mutated meme — how the mutation occurs depends on the individual human and thus has limitations — some predictable, some totally random-seeming.

Do memes come about as the result of processes analogous to how biological life is thought to start? If so, where is the primordial sea where memes — the evolutionary germ of thought — are created and propagated? If this is true, doesn't that mean if your mind — or soul — is transferred from your body and brain to another body and brain, that soul must change because of the transfer? This is noted in both Ghost in the Shell where degradation of the "ghost" can be observed when "ghost dubbing" occurs, as well as in Attack of the Clones. In the Star Wars universe, cloning from the original host is necessary. After a while, the template degrades, and a new imprint must be taken from the host. Thus they keep Jango Fett on Kamino at high cost.

However, in the cloning case, they are not talking about a person's consciousness moving from one body to another, but something akin to making simple copies, with each body living a separate and independent life. The two exceptions to this in the Star Wars expanded universe were the transfer of the Emperor from one body ravaged by use of the dark side of the Force to the body of a clone, and when Death Star engineer Bevel Lemelisk was punished by being executed over and over. Lemelisk would "wake up" in a new body and see his old body being tortured, but his mind and soul now lived in a clone body, thus such a "consciousness transfer" must have been possible.

And yet the memory must be changed somewhat every time a new body is occupied, and thus the soul or the person is changed regardless of the thoroughness or the "cleanliness" of the process. This is true even superficially, when the ability to transfer yourself from body to body becomes part of your reality. If this "new" incarnation of the person is not the same as the one that came before it, is the new person the same as the old one? What makes the new person different from a robot which had been manufactured? Should that new person — the copy — have any rights as a human? Or be treated as a "made object" like any other thing?

This essay has turned out longer than I'd intended, simply because there isn't a clear answer. We have emotional "of course!" types of thoughts in our heads because we haven't thought everything through. As humans, we can choose to ignore ideas, but that doesn't mean they don't exist. Human history and literature are full of stories of rebellion, even when the consequences of that rebellion are death or extreme suffering. Those types of stories resonate with all humans because that's a common part of our programming.

We don't necessarily rebel or do things for any greater good — teenagers inevitably do things for extremely selfish reasons, and many humans never grow out of this phase of life. This is why odd, illogical things happen: some middle aged men buy cars and motorcycles they have dreamed about since they were 16, and take girlfriends and wives who look 16. Some women truly believe their child is superior to a neighbor's child, overlooking the fact that in order to get good grades, that same child might be bullying other kids for their homework. Self-delusion is a very human trait.

The limitations of human thought are based on the limits of our brains and atomic/cellular structures. Humans are programmable, though inputting the information is not as straightforward as normal data processing. For instance, human cultures seem to evolve in terms of "outside forces": in general, humans believe in a god or something that created humans and everything on this earth and beyond. We evolve through agrarian-style gods, to a polytheistic model, to a monotheistic, non-flawed god whom we eventually accept we cannot understand. Concepts of "good" and "evil" are very similar from culture to culture, even though there is no evidence that these different cultures had any contact in the past. All of this seems to point to limitations in our "programming" as humans. Any deviations, mutations, or variations against our programming tend to have dire consequences for those who display their differences.

In essence, viewed in this light, there is no difference between humans and robots; robots may simply be simpler versions of ourselves, just as many religions accept that humans are simpler versions of a god. Is it, in fact, important to consider them different from us? Or is that just another example of human vanity? Is "to be" a good enough reason to accept robots as robots, and humans as humans?

The comedy machinima Red vs. Blue has a human undergo surgery to become a cyborg, because robots cannot be trusted. They can be reprogrammed or affected to work for the enemy quite easily, so the commander of the Red squad orders his favorite soldier, Simmons, to become a cyborg. This way, all the advantages of a robot could be had without the loyalty and overt reprogramming issues. Simmons's human body parts, in turn, are given to Pvt. Grif, who had been badly injured in an explosion. Even though Grif now has Simmons's body, there is never any doubt that he is Grif, to the point that he would blame Simmons's body parts for mishaps ("Don't blame me, it's YOUR stupid cerebral cortex!"), and Simmons would berate Grif for smoking and eating junk food ("You are not damaging MY body like you did to yours!"). The ghost (as in coming back from the dead) of Church occupies the robot body of Lopez, and finds it a lot better than his old human body. There is no doubt to these simpletons that the "body" is not the seat of a person's essence — if you think too hard, anything is a problem.

It is no accident that as a human child slept in the body of a runaway, apparently sentient tachikoma in Ghost in the Shell, the words "To Be Or Not To Be" flashed on and off on the screen conveying the tachikoma's confused thoughts. One wonders if the tachikoma is actually worse off realizing this "next step" in its consciousness. The child-like tachikomas, human children, and dumb soldiers might have the same succinct conclusion to that question, "They just are!" And perhaps that's the best answer for most people.

Disclaimer: Opinions expressed are the author's own, and no profit or lucre is expected, solicited, advocated or paid. This is all just for fun. Any comments, please e-mail the author or WOOKIEEhut directly. Flames will be ignored. Characters and situations are the property of LucasFilms Ltd., Bantam Publishing, Random House, and their respective original owners and developers. This essay / editorial may not be posted anywhere without the author's knowledge, consent, and permission.