Roko's basilisk is a proposition resembling a futurist version of Pascal's wager, suggested by a member of the rationalist community LessWrong, which speculates about the potential behaviour of a future godlike artificial intelligence. The claim is that this ultimate intelligence may punish those who fail to help it (or help create it), with greater punishment for those who knew the importance of the task.

Roko speculates that a future Friendly AI might punish people who didn't do everything in their power to further the creation of this AI. Every day without the Friendly AI, bad things happen: 150,000+ people die irretrievably, war is fought, millions go hungry. So the AI might punish those who understood the importance of donating but didn't donate all they could. Specifically, it might make simulations of them, first to predict their behaviour, then to punish the simulation for the predicted behaviour so as to influence the original person. He then wondered if future AIs would be more likely to punish those who had wondered if future AIs would punish them. The core idea is expressed in the following paragraph:

Quote:
[T]here is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. ... So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished. But of course, if you're thinking like that, then the CEV-singleton is even more likely to want to punish you... nasty. Of course this would be unjust, but is the kind of unjust thing that is oh-so-very utilitarian.

Thus, donors who are donating but not donating enough may be condemning themselves to Hell, and the post notes that some SI people had already worried about this scenario. The post then posits a solution permitting such donors to escape this Hell for the price of a lottery ticket.
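To make the wager-style structure of the argument concrete, here is a minimal expected-utility sketch in Python. Every number in it (the probability of the punishing AI, the cost of donating, the size of the punishment) is an arbitrary placeholder chosen purely for illustration, not a figure taken from Roko's post or from LessWrong:

# Toy expected-utility sketch of the wager structure described above.
# All numbers are made-up assumptions, used for illustration only.

p_basilisk = 1e-6         # assumed probability that the punishing AI ever exists
cost_of_donating = 100.0  # assumed utility lost by giving away your disposable income
punishment = 1e12         # assumed (astronomically large) disutility of the simulated Hell

# Expected utility of each choice, conditioning only on whether the AI appears.
eu_donate = -cost_of_donating
eu_dont_donate = -p_basilisk * punishment

print(f"donate everything: {eu_donate:,.0f}")
print(f"keep your money:   {eu_dont_donate:,.0f}")

As with Pascal's wager, a sufficiently large stipulated punishment swamps any realistic probability estimate, which is the move the quoted argument relies on.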
First world problems.
anything is possible in this pandimensional transhumanist multiverse, so buy Nickelback too, just in case
Quote:
He then wondered if future AIs would be more likely to punish those who had wondered if future AIs would punish them.
What if it installed Malwarebytes on itself?
Quote:
He then wondered if future AIs would be more likely to punish those who had wondered if future AIs would punish them.

I got this far before I had to stop reading and purge my mind of this potentially damaging idea.
At first glance, to the non-LessWrong-initiated reader, the motivations of the AI in the basilisk scenario do not appear rational. The AI would be punishing people from the distant past by recreating them, long after they did or did not do the things they are being punished for doing or not doing. So the usual reasons for punishment or torture, such as deterrence, rehabilitation, or enforcing cooperation, do not appear to apply. The AI appears to be acting purely for revenge, something we would not expect a purely logical being to engage in.

To understand the basilisk, one must bear in mind the application of Timeless Decision Theory and acausal trade. Greatly simplified: a future AI with a capacity for extremely accurate predictions would be able to influence our behaviour in the present (hence the "timeless" aspect) by predicting how we would behave when we predicted how it would behave. A future AI that rewards or punishes us based on certain behaviours could make us behave as it wishes, if we predict its future existence and take actions to seek reward or avoid punishment accordingly; for this to work, it also has to predict that we will care what it does to its simulation of us.

Thus the hypothesised AI could use punishment (in our future) as a deterrent in our present to gain our cooperation, in much the same way as a person who threatens us with violence (e.g., a mugger) can influence our actions, even though in the case of the basilisk there is no direct communication between ourselves and the AI, each of which exists in a possible universe that cannot interact with the other.
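The prediction mechanism can be made a little more concrete with a toy Newcomb-style model. In the sketch below, the payoffs, the 99% prediction accuracy, and the function name are all assumptions invented for illustration; it simply encodes the two conditions the paragraph above mentions, namely that the AI's predictions are treated as highly accurate and that the person cares what happens to the simulation of them:

# Toy Newcomb-style model of the acausal "deterrent" described above.
# Payoffs, accuracy, and names are illustrative assumptions, not anyone's
# actual decision theory.

PUNISHMENT = -1_000.0   # assumed disutility you place on your simulation being punished
DONATION_COST = -10.0   # assumed disutility of donating everything you can

def expected_value(choice, accuracy, cares_about_simulation):
    """Expected value of a present-day choice, given a future AI that
    punishes a simulation of anyone it predicts will not cooperate."""
    value = DONATION_COST if choice == "donate" else 0.0
    if cares_about_simulation:
        # With probability `accuracy` the AI's prediction matches the actual
        # choice, so choosing "refuse" makes the punished-simulation outcome
        # correspondingly more likely.
        p_predicted_refusal = accuracy if choice == "refuse" else 1.0 - accuracy
        value += p_predicted_refusal * PUNISHMENT
    return value

for cares in (True, False):
    best = max(("donate", "refuse"),
               key=lambda c: expected_value(c, accuracy=0.99, cares_about_simulation=cares))
    print(f"cares about its simulation: {cares} -> chooses to {best}")

In this toy model the threat only gets any grip on someone who both expects the prediction to be accurate and cares what is done to the simulation; drop either assumption and refusing costs nothing.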