

They Have a Plan: The Philosophy of Consciousness in Battlestar Galactica

[Hey all! Enjoy this guest post from Nathaniel Hanks! – Ed.]

Consciousness is both one of the most familiar things to all of us and one of the most mysterious.

Daniel Dennett

There’s hardly a better primer for consciousness studies than Battlestar Galactica. “What is consciousness?” “How does the brain produce it?” “What do you need to have consciousness?” No one knows the answers. Philosophers have been puzzling over consciousness for centuries, and only recently has science offered much help. The 2003 Battlestar Galactica catches viewers up on the philosophy of consciousness, on the modern neuroscience of consciousness through Cylon technology, and on the ethical implications of consciousness in non-human beings through the series’ Cylon/Human conflict.

If you haven’t seen BSG, this article will contain spoilers. Having watched the show isn’t necessary, since the series highlights real, ongoing philosophical and scientific questions about consciousness, but it may help flesh out some of the arguments.

Since consciousness will be the central topic, I want to make a clarification at the start. We’re not talking about ‘conscience’ in BSG but ‘consciousness’. John Searle describes consciousness this way:

“when you wake up in the morning you have it and you have it all day. And then it goes away when you go to sleep. And, sometimes, you have it in dreams.”

Consciousness is ‘what it’s like’ to be something. If there’s nothing that it’s like to be a tree, then a tree probably isn’t conscious. If, on the other hand, we reason that there is something it’s like to be a dog, then the dog probably is conscious.

But, knowing what it’s like to be something other than ourselves is problematic. People have to tell us how they’re feeling, or we have to imagine what they might be thinking, but we can never really glimpse their first-person, subjective experience except through our own (Daniel Dennett disagrees that another creature’s subjectivity is necessarily barred, but that’s an argument for the comments). This is the Problem of Other Minds and BSG introduces it with the first spoken lines of the series.

“Are you alive?” the Cylon Number Six asks. “Prove it.”

In a torture scene, the Cylon model Leoben illustrates the problem for Kara Thrace (Starbuck) another way:

Starbuck: A smart Cylon would turn off the old pain software right about now.
Leoben: Maybe I’ll turn it off and you won’t even know.

“Flesh and Bone”

Subjective experiences are unique because they are not available to other subjects; they can never be objects. It’s a weird problem. Imagine two people looking at two colored cards. Amy sees card 1 as red and Bob sees card 1 as green; with card 2 it’s the opposite: Amy sees green and Bob sees red. Now suppose Amy and Bob are taught the names of colors by a teacher who sees color like Amy. The teacher holds up the first card (Amy-red/Bob-green) and says ‘Red’; with the second card (Amy-green/Bob-red) she says ‘Green’. Both Amy and Bob now agree on names for two completely different experiences. They would consistently agree that a stop sign is ‘Red’ and that grass is ‘Green’, yet each would have a different subjective experience of each. What’s more interesting, Amy might like ‘Red’ and make fun of Bob for liking ‘Green’, when in fact both would be enjoying the very same experience of red. Neither could ever know how the other’s experience differs.
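The inverted-spectrum scenario can be sketched as a toy program (my illustration, not from the article; all names here are made up). Each agent maps a stimulus to a private quale, but verbal reports are keyed on the public label learned from the teacher, so the swapped qualia never show up in behavior:

```python
# Toy model of the inverted-spectrum thought experiment.
# Each agent maps a physical stimulus (a card) to a private quale;
# the mappings are swapped between Amy and Bob.

AMY_QUALIA = {"card_1": "red-quale", "card_2": "green-quale"}
BOB_QUALIA = {"card_1": "green-quale", "card_2": "red-quale"}

# Public labels are learned from the teacher and keyed on the stimulus,
# not on the private quale -- so both agents learn the same words.
PUBLIC_LABELS = {"card_1": "Red", "card_2": "Green"}

def name_color(agent_qualia, stimulus):
    """An agent's verbal report depends only on the learned public label."""
    _private_experience = agent_qualia[stimulus]  # never leaves the agent
    return PUBLIC_LABELS[stimulus]

# Amy and Bob agree on every report...
for card in ("card_1", "card_2"):
    assert name_color(AMY_QUALIA, card) == name_color(BOB_QUALIA, card)

# ...even though their private experiences differ on every stimulus.
assert all(AMY_QUALIA[c] != BOB_QUALIA[c] for c in AMY_QUALIA)
print("Reports match; qualia differ.")
```

The point of the sketch is that `name_color` never consults `_private_experience`, so no amount of observing the agents’ outputs could reveal the swap.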

Philosophers call sensational experiences, like red and green, ‘qualia’. Qualia (the singular is ‘quale’) are those ineffably subjective moments of consciousness; the ‘what it’s like’ to taste an apple’s sweetness or feel a blanket’s softness.

Or a Cylon's vampiness.

Though qualia may be a sort of user-illusion, everyone knows what you’re talking about when you say that ice ‘feels cold’. Since we can’t know for sure, we must take others at their word that they have subjective experience.

This is why Cylons are interesting. They’re not just apparently human machines like the Terminator, or mechanized humans like Robocop; Cylons are synthetic humans “like us [with] identical internal organs and lymphatic systems,” who also claim to have conscious experience.

Adama: How does that make you feel? If you can feel?
Leoben: Oh, I can feel more than you can conceive.

– “Pilot”

Cylons cry and get hungry. Some of them fall in love with humans and give birth. We may have to grant that Cylons are necessarily conscious because, as V.S. Ramachandran puts it, “there’s no such thing as free-floating qualia without a self experiencing them. Or, a self without qualia”. It seems that qualia necessarily entail an experiencing self and, if you have a self, you’re conscious.

But couldn’t they just be pretending? Maybe some advanced heuristic software is generating context-appropriate emotional language. David Chalmers addresses this very possibility with the ‘Philosopher’s Zombie’ thought experiment.

The Philosopher’s Zombie isn’t like Romero’s crazed undead. In addition to looking perfectly human, these zombies go about their day in a human way. They have coffee and read books. They even write about the problems of consciousness for other zombies. But the crucial difference is that they have no first-person, subjective experience: they are not conscious.

Chalmers argues that such a creature could exist in theory but that here, on non-hypothetical Earth, we are conscious. Other philosophers, like Paul Churchland, disagree with Chalmers about zombies: just because a zombie is imaginable, says Churchland, doesn’t mean it’s possible. There is also the interesting suggestion that a Philosopher’s Zombie (or a Cylon) that could behave as if it were conscious would thereby be conscious. A sort of Turing test for a self.

Anyway, the thought experiment is meant to highlight an odd point. If consciousness were only a matter of nerve reflexes, preferences, intelligence, or decision-making, then there would be nothing that couldn’t be explained away as heuristics or genetic algorithms running in the brain exactly like computer software. Chalmers puts it another way: “why is all that complex processing also accompanied by experience?”

In The Astonishing Hypothesis, Francis Crick expands the point with the problem-solving capabilities of bees. Bees are capable of finding their way to food based on color cues and can learn to solve problems more quickly by remembering past color cues that led to food. Crick notes that if a potentially comatose patient could perform any of the tasks done by the bee, we would not hesitate to declare the patient conscious. But then is the bee also conscious? If not, why not? More to the point, computers outperform their conscious designers all the time, but we never assume they’re conscious.

Since we can’t simply look and see consciousness, we have to either accept the Cylons’ claims to a subjective self, because they describe qualia, or re-evaluate how we determine the existence of consciousness in others. If that isn’t problematic enough, the very act of questioning consciousness may be harder still: it’s dizzying to imagine how the self could find itself.

[Is there anything else in “Battlestar Galactica” that suggests Cylons are conscious? Or that proves they aren’t? Sound off in the comments! – Ed.]

When he’s not watching Battlestar Galactica or playing Call of Duty: Modern Warfare 2, Nathaniel Hanks writes about consciousness – particularly the mind-body problem – and evolution at his own blog. Stay logged on for further Overthinking It posts about Battlestar Galactica and the philosophy of consciousness.
