I'm Stephanie.

Saturday, April 14, 2012

post v: Do androids dream of...

...writing papers about electric sheep?

I have been. Rather than actually writing the paper though, I've just been pondering the shit out of it. Let's look at this blog post as hopefully my last step in pondering before I just write the bastard.

How Philip K. Dick's name is not better known is beyond me. Do Androids Dream of Electric Sheep? is one of the most artfully crafted dystopian novels that I can ever hope to encounter. While I am excited to write a paper about it, I am also more intimidated than I am with other texts because I feel like I owe this one something. Dick never, ever strayed from the deepest of ontological questioning while still managing to incorporate synthetic cats and codpieces into the text. Win.

Essentially, the prevailing questions from the novel, or at least those I gathered, were:
- Is what we consider to be "human" replicable in androids? What do we consider to be "human"?
- Is "empathy" inherent to human nature, or is it merely a learnable construct designed to create solidarity within a given populace to assist in ensuring order, obedience, and consumption?
- Can androids feel empathy?
- If the android believes itself to be human, what is making it not human? If it is aware of, fears, and ponders death, operates through language, and is biologically the same as a "real" human, is there still something making it not human? (which returns us to the question, "what is human?")
- Is there nothing out of reach of becoming commodified? Even emotion itself?
- When the simulations become indecipherable from the "real," do they too become real? A new kind of real? But real nonetheless? Or was nothing real in the first place?
- Is there something lost when the lines between "human" and android become blurred?

I sound like a professor writing possible essay prompts within an assignment sheet. But I kind of am doing this. For myself.

I overwhelmingly got the feeling by the end of the novel that "human" is more of a construct than an inherent fact. We consider someone no less human for living with artificial organs or limbs that function and cooperate with the brain as the "originals" would have done. So if eventually we can build a human from the ground up, flesh and blood, with the capacity for love, sex, despair, hunger, thirst, and epistemological discoveries, what would separate these "artificial" humans from the real? I guess birth. There wouldn't really be birth in the same sense, buuuuut yeahhhh....

But basically, I don't think Dick would have written this novel if he didn't feel like something was being lost in our ability to replicate even ourselves and our emotions, while recognizing that we already simulate many of our emotions anyway (and that television and television-culture, religion, and even just standard social conduct have a huge impact on this). However, by the end of it, I didn't really feel loss. I felt just as much "empathy" for many of the androids as I did for the "humans," and I felt pretty overall indifferent about my own inability to distinguish who was worth emotion and who wasn't. I felt really sad for the dying synthetic cat, and really wanted to take care of the artificial toad near the end of the novel, even though I knew it was an artificial toad. I just felt kind of submissively accepting of posthumanism. Perhaps if I weren't raised amongst artificiality with such intense familiarity, I would have felt more concerned for our potential up-and-coming cyborg reality; the novel was written in 1968, when the fusion of television, advertising, and technology with biology was only just beginning to bud before it eventually exploded...
Whoa. I still need to write a paper on this.

Suddenly though, I am reminded of Furby. When kids can't have a real pet for whatever reason, isn't the Furby, or the Tamagotchi, or those weird baby dolls that actually pee and have a "realistic stool" a suitable replacement? Don't kids immediately latch onto these synthetic imitations of life and try to nurture them in whatever way they know how?
If technology advanced far enough to make these kinds of creations completely believable to children, and to adults, would the emotion behind that nurturing really change? If the Furby could "die"? If the baby doll could grow into an adult, become sexually active, fall in love, go to college, reproduce, age, and also die? What would really be at stake? Especially if no one - not even the synthetic being - was aware of its artifice?

I kind of hate to end with Furby.