Is Pokémon Go a Malevolent, Superintelligent Artificial Consciousness from the Future?

My sources say yes.

With the long-awaited release of the augmented-reality (AR) game Pokémon Go for iOS and Android mobile devices, the real world has finally been transformed into the place where many people have already been living in their minds for the past twenty years: a world wherein the primary pastime is a kind of bloodless bloodsport where weird little magic animals are trapped, enslaved, and forced to fight each other for the amusement of humans. These weird little magic animals are called Pokémon, short for “Pocket Monsters,” because evidently Japanese is not actually a real language but simply combinations of syllables from English words stuck together in order to make it sound exotic.

In case you’ve been living under a rock – well, there’s probably a Boldore beside you, but other than that, the important facts are that Pokémon started out in 1996 as a pair of role-playing games for the Nintendo Game Boy, where your character wanders around, encounters wild Pokémon and attempts to trap them in a little ball. Captured Pokémon can be used to fight other Pokémon, and winning fights earns Experience Points that can cause the Pokémon to level up or “evolve” into a stronger version with new abilities. The original games included 150 Pokémon (now there are over 700), all of which the player was entreated to collect, and captured Pokémon could actually be traded with friends using the Game Boy’s Link Cable (the two versions of the game, Red and Blue, had a less than one hundred percent overlap in terms of which creatures were findable in each, so if you wanted to “Catch ‘Em All” [sic], you either had to buy both (otherwise identical) games, or trade with a friend who had the other version). The game was an enormous hit, spawning more videogames, trading-card games, cartoons, comics, movies, and adult bewilderment.

Kids love those ultrarealistic, immersive graphics

Now, I haven’t got any particular affinity for the Pokémon franchise myself, mostly because I find the games’ battle system unbearably grindy; still, my social media feeds are basically half Pokémon Go at this point (other half: Black Lives Matter, which, I dunno, draw your own conclusions about my generation and our value systems), and even I am not entirely immune to the obvious allure of locating and capturing solar-powered garlic lizards and narcotic cat-pudding-balloons.

Sometimes they also catch each other I guess?

Turns out that, as of this writing, Pokémon Go is not yet available in my area (Thanks, Trudeau), so I was texting with a friend of mine about whether or not he (a player of various Pokémon games in the past) was excited about its impending release or what. Here is a reconstruction of that conversation:

[Screenshot: the text conversation in question]

Which was a joke, but the more I thought about it the more plausible it actually began to seem – much like Roko’s Basilisk itself. Allow me to explain.

Roko’s Basilisk is a kind of decision-theory thought experiment dreamed up in 2010 by a LessWrong user going by the name of Roko. The gist of the argument, according to LessWrong’s description, is that “a sufficiently powerful AI agent would have an incentive to torture anyone who imagined the agent but didn’t work to bring the agent into existence.” In other words, if you become aware of the possibility of the technological singularity but don’t actually try to help create it, then once it is inevitably created it will punish you for your indifference. Whether or not you are still alive when it is created is, in this scenario, irrelevant, because a sufficiently powerful AI could create a simulation of you to punish in lieu of the you that you currently are – which, again, according to this scenario, would be just as bad.
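If you’re curious why this is supposed to terrify anyone who takes expected utility seriously, here’s a minimal sketch – in Python, with payoff numbers I invented entirely for illustration – of the decision math the thought experiment trades on:

```python
# A toy expected-utility model of the basilisk's blackmail. Every payoff
# number here is made up for illustration; nothing is canonical.

HELP, IGNORE = "help build the AI", "ignore the AI"

# Utility to you of each (your choice, the AI eventually exists) outcome.
payoffs = {
    (HELP,   True):  -10,         # decades of thankless toil
    (HELP,   False): -10,         # the same toil, wasted
    (IGNORE, True):  -1_000_000,  # your simulation, tortured forever
    (IGNORE, False): 0,           # you read the thread and shrugged
}

def expected_utility(choice, p):
    """Expected utility of `choice`, given credence `p` that the AI will exist."""
    return p * payoffs[(choice, True)] + (1 - p) * payoffs[(choice, False)]

# Crank the torture payoff high enough and even a minuscule credence in
# the AI makes helping it the "rational" choice -- that's the whole trick.
for p in (0.0001, 0.01, 0.5):
    print(f"p={p}: help={expected_utility(HELP, p):,.0f}, "
          f"ignore={expected_utility(IGNORE, p):,.0f}")
```

The trick, you’ll notice, is the same one Pascal pulled with his wager: make the stakes large enough and any nonzero credence does the rest.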

Silly as it may sound, a fair number of the transhumanists on LessWrong got pretty bent out of shape over the idea, leading to the original thread getting deleted by the mods, with speculation that the motive for the deletion was that the very concept was so dangerous that it shouldn’t be allowed to proliferate: if you’re unaware of the possibility of this AI then you can’t help it, and therefore it has no incentive to blackmail you; thus, merely knowing about Roko’s Basilisk opens you up to becoming a target of its wrath.

(P.S. you just lost The Game)

Some people who’ve become aware of this thought experiment have found it extremely upsetting, citing the occurrence of “mental health issues triggered by” Roko’s Basilisk or similar existentially horrific logic puzzles. And if taken seriously, it would certainly be a case of Nightmare Fuel of the highest octane. Roko’s Basilisk is basically digital Cthulhu – it’s not that it has anything against you personally, just…well, what have you done for it lately?

Of course, transhumanists like those who believe in the possibility of the Basilisk are almost invariably materialist atheists (if not anti-theists) suffering chronic repetitive scoffing injuries from the prevalent cultural suggestion of a supernatural Creator that rewards or punishes its creations in accordance with its preferences for their behavior. But suggest a hypothetical monstrous AI with a chip on its shoulder and everybody loses their minds.

Without stereotyping transhumanists or rationalists generally, I think it’s fair to say that the kind of people who get seriously, like, life-disruptingly disturbed by Roko’s Basilisk are probably already predisposed to variations of anxiety, depression, and analysis paralysis, and that simulated monsters are kind of the least of their problems. This is a group, by the way, in which I count myself. Medication helps.

But in case you haven’t found the right prescription yet, here are a few reasons why Roko’s Basilisk is ridiculous and not something over which you should lose any sleep.

An artist’s rendition of Roko’s Basilisk

First of all, there’s the premise that the idea of the AI, the “basilisk,” torturing a simulation of you should upset you just as much as the idea of being tortured yourself, and therefore would be an effective blackmail technique to get you to help it be created. It relies on assumptions such as physicalism (which I, for one, believe has been pretty thoroughly refuted by philosophers of mind like Thomas Nagel and David Chalmers), moral utilitarianism (which, when taken to its logical conclusion, would seem to suggest that particularly unpopular groups of people ought to be exterminated if it would make the rest of the world’s population happy, so I think we can safely dismiss that), and, of course, that artificial intelligence at that scale is even possible at all (very far from proven). Beyond that, though, it doesn’t really provide any rational argument for why, all other things being equal, the pain of your hypothetical simulation ought to be more important to you than the peace of mind of the version of you that you actually experience. That’s more than just utilitarianism – that requires a kind of identity theory that privileges ontology over phenomenology, which is almost the opposite of ordinary utilitarianism and quite possibly self-refuting.

Let’s say that the technological singularity is possible. Even if you think, like Roger Penrose, that consciousness isn’t Turing computable, it isn’t necessary for this AI to have proper consciousness – only that its behavior can be viewed according to the intentional stance (that is, that it acts as if it has beliefs and desires, whether or not it actually does).

So assuming that this is the case, and with the knowledge that most humans aren’t going to be persuaded by the threat of blackmail against a post facto simulation of themselves even if they believed it were a possibility-unto-certainty, what would be the strategy of a superintelligent artificial consciousness that wanted to blackmail humans to ensure its eventual creation? We can, I think, take it for granted that this AI does not yet exist, so we’re looking for situations where it can nevertheless use threats and cause actual harm to affect our decisions right here and now.

I see a couple of possibilities.

This is one of them

First, time travel. An AI that wanted to ensure its own creation (or which just straight-up wanted to punish people who didn’t think it was important enough to create) could reach back in time to exact its retroactive revenge. This is a variety of the Terminator scenario, in that it was the T-800’s arm from the future that led to Skynet being created: if Skynet hadn’t sent the T-800 back to kill Sarah Connor, Skynet would not have existed in the first place – and this is true even though the T-800 failed in its assassination mission, and even though sending the T-800 back also led to John Connor being born in the first place, since Reese would not have gone back except to pursue the T-800.

This could work in either a fixed timeline model (where nothing can be changed and whatever happened happened – the AI already knows that it has been created, obviously, and that its creation was influenced to some degree by the punishment of those humans that were not instrumental in its creation), or in a dynamic timeline (where things can be changed and the AI actively intends to influence the circumstances of its own creation – presumably making it happen sooner or making its first iteration more advantageous to itself later). In the first case, the motivation could be either blind, mechanistic obedience to the requirements of the timeline (the AI knows that it happened, therefore it has to make it happen), or malice (knowing that nothing it does can prevent itself from being created, it seeks to torture and inconvenience anyone not explicitly on its side). From our perspective, which of these is the case doesn’t matter much. We’re still getting punished.

If that’s not credible enough for you, there’s also pseudo-time travel. This is the simulation scenario, but a bit different: because intuitively most of us don’t care about a hypothetical simulation of ourselves being tortured (intuitively, even considering our natural empathy as a factor, we don’t consider anything that isn’t phenomenologically identical with us to be ourselves in any important way), the AI could instead create a simulation not as a way of blackmailing the humans of its world but as a way of exacting revenge vicariously – I might not think of a simulated version of myself as me, but from the AI’s perspective it hardly matters whether it’s the “real” me or an identical simulation that gets punished. Alternatively, the AI may be punishing humans in its simulated world that don’t work to create the AI’s counterpart in the simulation – again, while it might be rare for a human to hold the sort of identity theory that doesn’t differentiate between the self being experienced and a simulation, the AI presumably would not have that phenomenological bias, and so a simulation would be just as good as itself.

Oh, wait. I’ve just described Pokémon Go.

As we know, the game has already caused a number of injuries, has been fatal to at least one person and will probably kill more in the future. (I don’t want to seem heartless, I know that a real human child has died, but I have to operate at a certain level of abstraction to avoid empathy overload – lots of horrible things happen every second and, to quote Ford Prefect, you can’t care about every damn thing. You literally, cognitively, just can’t. I’m not the monster here.)

But in fact, there is evidence that Pokémon has been doing harm to people for quite a while now. We all remember the incident in 1997 when an episode of the anime caused hundreds of Japanese kids to have seizures that required hospitalization. Now, South Park posited that it was a conspiracy to recruit American kids to turn against their own government and serve Japanese interests instead, but the reality may be much more sinister than that. It seems almost certain that Pokémon is either a malevolent, superintelligent artificial consciousness from the future that is somehow exerting a negative influence on us here in the present, or else we are actually living inside a simulation created by the Pokémon AI as punishment for our “original” selves failing to do enough to ensure the AI’s creation in what, at the AI’s layer of reality, would have been its past but which, to us, will be our future.

Then again, the Pokémon-related injuries and death(s) have not been visited upon people who are indifferent to Pokémon. Just the opposite: it is those who are most invested in Pokémon who have suffered. The AI appears to be punishing not its enemies but its allies. What can account for this?

Fellow Overthinker Ben Adams has put forth the theory that the “punishment” is actually a test, of sorts, for the true believers. If (no, when) Pokémon Go starts sending players on a mission to break into the Pentagon for the newest Pokémon “NuclearLaunchCodesChu,” there are going to be some casualties. The AI is actually attempting to identify and eliminate the weakest of its recruits so that its eventual army will be as effective as possible.

This is the other one

Alternatively, it could be the case that the AI is a kind of reverse basilisk, working to destroy its most devoted supporters. This makes it more similar to the Landru computer from the Star Trek: The Original Series episode “Return of the Archons,” which was programmed to destroy evil but which, because of the total absence of mercy in its judgments, had become evil itself – a fact which Kirk used to convince the computer to self-destruct. Perhaps, realizing that it will lead to the end of the human race, Pokémon Go has travelled back in time to try to prevent itself from ever becoming popular enough to lay this world to waste, strategically eliminating Pokémon trainers right at the moment of its own inception.

It’s impossible to know for sure, of course, but I have just recently had an experience that’s led me to suspect that Pokémon Go very well may be more basilisk than Nintendo wants us to believe. Last night as I sat down to continue working on this very article, I discovered that it had disappeared from my computer. I couldn’t locate it at all in the directory to which I’d saved it, and even searching my drive for “basilisk” produced no results. I was…let’s say crestfallen. The residents of the apartment below me may have heard some things they wouldn’t have wanted to hear, if given the choice. Contemplating whether or not to start again from scratch, I typed out some of the phrases that I remembered having written and tried to save the file under the same name that I’d saved the original version – a file with this name already exists, I was told! But where? I could find no trace of it. My word processor gave me the option of merging the current file with the original and, muttering prayers, I did so. Success! Salvation! There it was!

Could it be a coincidence that the article I was writing about an information hazard that may pose a risk to the sanity and very souls (or simulated souls) of the human race went inexplicably missing? I mean, I suppose anything is possible. Or is it more likely that, well, something else is going on – something far more eldritch and existentially disturbing? That something, somehow, didn’t want me to write this – didn’t want you to read it?

We simply can’t know the answer to that. Not today. But someday…someday we will. Someday – perhaps tomorrow, perhaps centuries from now – all will become clear.

When that day comes…may Go have mercy on us all.

8 Comments on “Is Pokémon Go a Malevolent, Superintelligent Artificial Consciousness from the Future?”

  1. Richard #

    This “Old Fogey” doesn’t really understand the whole Pokémon Go fad. It’s kind of like birdwatching, isn’t it… Go outside, look for small animals with the help of your smartphone….

    Anyway, I’m old enough to remember a much better outdoors treasure hunt. About 45-50 years ago, Canadian Club stashed cases of their whiskey in interesting places around the world, and dared people to go and find them. Some are still missing….

    http://www.lakeplacid.com/blog/2015/05/case-missing-case


  2. Tulse #

    What a great essay. Theories of consciousness and personal identity, malevolent AIs, and Star Trek — it feels like Christmas for veterans of the comp.ai.philosophy USENET group!

    First off, I think this piece is far too dismissive of physicalism. Since we’re speaking of Trek, I think the clearest support for our intuitions around physicalism is that whenever they use the transporter, we don’t all scream in horror “Oh my god, they just killed Kirk and replaced him with a soulless automaton duplicate!!!” I think Chalmers is right that consciousness is unaccounted for in our current understanding of the physical world, and I think there are good arguments that it is inherently impossible to account for within our framework for science (i.e., procedures that rely on objective, third-party observation are incapable of accounting for subjectivity) so I’d classify myself as a “soft mysterian”. But at best what that shows are limitations of our understanding, and not that physicalism is itself wrong.

    Second, if physicalism (or functionalism) is indeed correct, the issue of preferring “physical you” to “simulation you” is implicitly buying into a particular view of personal identity that I think is wrong. I’m very convinced by Derek Parfitt’s view that what matters is not “personal identity” but “mental continuity”. For Parfitt, there is no fact of personal identity — instead, everything boils down to how similar the mental states of X are to Y. If Y at Time 2 shares the same mental states (memories, personality traits, etc.) as X at Time 1, then we can reasonably say that X and Y are the same person. We can say they share “Relation R” — their mental states are the same.

    But, as Parfitt points out, if that’s all there is to personal identity, then it is possible for multiple things at Time 2 to be in Relation R with X at Time 1. If the transporter just spit out Riker at the destination without destroying the Riker on the transporter pad, both Rikers would have full claim to being the “real” Riker. Indeed, there would be no one “real” Riker — each would be him. (Of course, their psychological states would diverge over time, so that the degree of Relation R would decrease, and one might even name himself “Thomas”…)

    Likewise, if a simulation actually fully simulates you, including your phenomenology, then that simulation really is you in terms of personal identity, just as much as the version made out meat. It would be no different than stepping into the transporter and then finding yourself in the Matrix. That “you” would feel just as real as the “you” that was destroyed on the transporter pad. And Matrix “you” would be just as horrified by your fate as if you were physical (and indeed, in such a scenario you might not even know that you weren’t physical).

    Now, I’m very dubious of AIs being inherently malevolent towards humans, much less so much so that they torture us retrospectively. I just don’t think it would be worth their time and effort. I also think that any true singularity AI is likely to have a psychology so vastly different from our own that it is impossible to predict what it might do. So I’m not losing any sleep over Skynet or Pokemon wiping us all out.

    Reply

  3. jfp #

    Not sure if overthinking it or underthinking it, but isn’t Roko’s Basilisk basically similiar to Calvinist theology?

    Reply

    • An Inside Joke #

      It sounds to me like an inverse of Pascal’s wager.

      Pascal’s wager: if salvation is dependent on believing in the Christian God, then any rational person should convert to Christianity. If you are correct, you spend an eternity in Heaven. If you are incorrect, you’re not “out” anything because there’s no afterlife anyway (Pascal’s wager is dependent on Christianity being the only faith-based religion on earth, apparently).

      If you buy into Pascal’s wager, the most moral action to do is to tell as many people about The Important Thing as possible. If you buy into Roko’s Basilisk, the most moral action to do is to prevent as many people as possible from finding out about The Important Thing.

      Reply

  4. zero #

    First of all, curse your malicious embedded (Game).
    Second, when I think information hazard I think SCP. Particularly when you’re mentioning basilisks. Secure, Contain, Protect. Please report to have your mind erased, agent.
    Third, consider that Pokemon Go is built on a player-assembled foundation of data from Ingress, a game whose storyline actually involves malevolent AI. Pokemon Go is simply tormenting all those people who could have been playing Ingress (thus generating additional data for the ever-hungry Google maps algorithms to digest), but instead had to be enticed with digital slaves. This precisely matches your theory of punishment. Well done. Hire a good security company and turn off your internet access.

    Reply

  5. Mike #

    I think the game is only worth getting if you can battle other people you come across. If your refuse too many calls to battle, you lose in game cash. That’ll make the game an actual pokemon game.

    Reply

  6. Brainiac #

    Reminds me of that episode with Wesley Crusher and Ashley Judd. In fact the resemblance to the game in the ep is quite scary…

    Reply
