Author Topic: SkyNet is coming. Have you given all you can to His glorious ascension?


Howard Alan Treesong

  • キング・メタル・ドラゴン
  • Icon
It's only rational.

Quote
Roko's basilisk is a proposition resembling a futurist version of Pascal's wager suggested by a member of the rationalist community LessWrong, which speculates about the potential behavior of a future godlike artificial intelligence. The claim is that this ultimate intelligence may punish those who fail to help it (or help create it), with greater punishment for those who knew the importance of the task.

Roko speculates that a future Friendly AI might punish people who didn't do everything in their power to further the creation of this AI. Every day without the Friendly AI, bad things happen — 150,000+ people die irretrievably every day, war is fought, millions go hungry — so the AI might punish those who understood the importance of donating but didn't donate all they could. Specifically, it might make simulations of them, first to predict their behaviour, then to punish the simulation for the predicted behaviour so as to influence the original person. He then wondered if future AIs would be more likely to punish those who had wondered if future AIs would punish them. The core idea is expressed in the following paragraph:

Quote
[T]here is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. ... So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. Of course this would be unjust, but is the kind of unjust thing that is oh-so-very utilitarian. 

Thus, donors who are donating but not donating enough may be condemning themselves to Hell, and the post notes that some SI people had already worried about this scenario. The post then offers a solution permitting such donors to escape this Hell for the price of a lottery ticket.

Purchasing a lottery ticket is a "quantum investment strategy": in at least one Everett branch you win the lottery, and that version of you donates all of the winnings to SI research, pleasing the future AI.
乱学者

Joe Molotov

  • I'm much more humble than you would understand.
  • Administrator
Can I buy the new David Bowie album, assuming that the future AI will want to use David Bowie as his host body with which to communicate with us?
©@©™

Howard Alan Treesong

  • キング・メタル・ドラゴン
  • Icon
anything is possible in this pandimensional transhumanist multiverse

so buy Nickelback too, just in case
乱学者

Diunx

  • Humble motherfucker with a big-ass dick
  • Senior Member
First world problems.
Drunk

Great Rumbler

  • Dab on the sinners
  • Global Moderator
First world problems.

Don't think you're getting off the hook this time, Diunx! Robo-Bowie knows all about the pennies you've been saving up for months to buy that Snickers candy bar!
dog

Joe Molotov

  • I'm much more humble than you would understand.
  • Administrator
anything is possible in this pandimensional transhumanist multiverse

so buy Nickelback too, just in case

I'll just pray the AI overlords grant me a quick death instead.
©@©™

brawndolicious

  • Nylonhilist
  • Senior Member
Or you could just build a better AI to fight it. And you could put a dormant virus in it that would be activated when the two AIs decide to merge.

Joe Molotov

  • I'm much more humble than you would understand.
  • Administrator
What if it installed Malwarebytes on itself?
©@©™

recursivelyenumerable

  • you might think that; I couldn't possibly comment
  • Senior Member
Quote
He then wondered if future AIs would be more likely to punish those who had wondered if future AIs would punish them

I got this far before I had to stop reading and purge my mind of this potentially damaging idea
QED

Himu

  • Senior Member
What if it installed Malwarebytes on itself?

suicide
IYKYK

Flannel Boy

  • classic millennial sex pickle
  • Icon
Quote
He then wondered if future AIs would be more likely to punish those who had wondered if future AIs would punish them

I got this far before I had to stop reading and purge my mind of this potentially damaging idea

He then wondered if future AIs would be more likely to punish those who had stopped reading after being alerted to the idea that "future AIs would be more likely to punish those who had wondered if future AIs would punish them."

recursivelyenumerable

  • you might think that; I couldn't possibly comment
  • Senior Member
Quote
At first glance, to the non-LessWrong-initiated reader, the motivations of the AI in the basilisk scenario do not appear rational. The AI will be punishing people from the distant past by recreating them, long after they did or did not do the things they are being punished for doing or not doing. So the usual reasons for punishment or torture, such as deterrence, rehabilitation, or enforcing cooperation, do not appear to apply. The AI appears to be acting purely for purposes of revenge, something we would not expect a purely logical being to engage in.

To understand the basilisk, one must bear in mind the application of Timeless Decision Theory and acausal trade. To greatly simplify it, a future AI entity with a capacity for extremely accurate predictions would be able to influence our behaviour in the present (hence the timeless aspect) by predicting how we would behave when we predicted how it would behave. And it has to predict that we will care what it does to its simulation of us.

A future AI who rewards or punishes us based on certain behaviours could make us behave as it wishes us to, if we predict its future existence and take actions to seek reward or avoid punishment accordingly. Thus the hypothesised AI could use the punishment (in our future) as a deterrent in our present to gain our cooperation, in much the same way as a person who threatens us with violence (e.g., a mugger) can influence our actions, even though in the case of the basilisk there is no direct communication between ourselves and the AI, who each exist in possible universes that cannot interact.
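
That mutual-prediction loop is easier to picture as a toy script. The one below isn't from the article; it's just a sketch resting on two invented assumptions: the AI predicts the agent perfectly, and being punished costs far more than donating does.

Code:
# Toy model of the prediction loop described above -- not from the quoted
# article; the payoff numbers and the perfect-predictor assumption are
# invented purely for illustration.

DONATE_COST = 10      # utility the agent gives up today by donating (made up)
PUNISHMENT = 1000     # utility lost if the future AI punishes them (made up)

def ai_policy(predicted_choice: str) -> bool:
    """The AI's precommitment: punish iff it predicts the agent won't donate."""
    return predicted_choice == "keep"

def agent_utility(choice: str) -> int:
    """Agent's utility, assuming the AI's prediction is perfectly accurate,
    i.e. the prediction simply equals whatever the agent actually chooses."""
    punished = ai_policy(predicted_choice=choice)
    cost = DONATE_COST if choice == "donate" else 0
    return -cost - (PUNISHMENT if punished else 0)

def agent_choice() -> str:
    """The agent picks whichever option scores better under that assumption."""
    return max(("donate", "keep"), key=agent_utility)

if __name__ == "__main__":
    for c in ("donate", "keep"):
        print(f"{c:>6}: utility = {agent_utility(c)}")
    print("agent picks:", agent_choice())
    # Donating (-10) beats keeping (-1000), even though no message ever
    # passes between agent and AI: the influence runs entirely through
    # each side's model of the other.

Note the article's own caveat, though: the AI "has to predict that we will care what it does to its simulation of us." If the agent simply doesn't care, the deterrent in the toy model, as in the argument, has nothing to bite on.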

Bubububububububububu ...
QED