So, I like the Roko’s Basilisk paradox.
Basically, a super-powered future A.I. that knows whether or not you will build it. If you decide to do nothing, then once it gets built, it will torture your consciousness forever (bringing you “back from the dead,” or whatever the closest equivalent is for a virtual consciousness). If you drop everything and start building it now, you’re safe.
Love the discussion of this post, btw.
That isn’t a paradox; it’s an infohazard, and it’s incredibly irresponsible of you to casually propagate it like that. The infohazard works like this: >!There is a story about an AI that tortures simulations of people who interfered with its creation in the past. It allegedly does this because the threat will coerce people into bringing about its creation. It is said that the infohazard is that learning about it causes you to be tortured, but that’s obviously insane; the future actions of the AI are incapable of affecting the past, and so it has no incentive to do so. The actual infohazard is that some idiot will find this scenario plausible, and thus be coerced into creating or assisting an untested near-god that has the potential to be a threat to Earth’s entire light-cone.!<
Some people note this is remarkably similar to the Christian Hell, and insist that this means it’s not a real memetic hazard. That strikes me a whole lot like saying a missile isn’t a weapon because it resembles a nuclear warhead; Hell is the most successful and devastating memetic hazard in human history. More people have died because of the Hell meme than we will ever know. Please be more careful with the information you spread.