• 0 Posts
  • 11 Comments
Joined 2 years ago
Cake day: June 10th, 2023

  • I assume they wanted to make Zelensky seem desperate and unhinged. But this backfired: Zelensky remained calm throughout, while Trump and Vance flipped out when he pointed out that distance alone won’t protect Americans from the fallout of further Russian aggression. And most EU leaders professed their support for Ukraine after the meeting. So, basically, the shouting was planned; they just planned for Zelensky to be the one doing it.



  • Nice story. I don’t like how, in the beginning, several paragraphs open with “L1”.

    Also, one hard counter to assassination markets would be to simply obscure the details of the death. Say the guy survived the first attack; then claim that a second attack killed him while he was in the hospital. Wikipedia would be unable to provide the correct details for quite some time, and the risk that wrong data gets entered would still be substantial. Alternatively, have the target “predict” his own death and then release a fake report of it. Claim the bounty, empty the pool.
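
    A rough, purely illustrative sketch of why a market that resolves bounties from public death reports is fragile (all names, dates, and numbers here are made up):

    ```python
    from dataclasses import dataclass
    from typing import Dict, List, Optional

    @dataclass
    class Prediction:
        predictor: str
        predicted_date: str          # e.g. "2025-03-01"

    def resolve(pool: float, predictions: List[Prediction],
                reported_date: Optional[str]) -> Dict[str, float]:
        """Pay the pool to whoever 'predicted' the publicly reported death date."""
        if reported_date is None:    # details obscured: the market simply can't resolve
            return {}
        winners = [p.predictor for p in predictions if p.predicted_date == reported_date]
        return {w: pool / len(winners) for w in winners} if winners else {}

    pool = 100_000.0
    predictions = [
        Prediction("attacker", "2025-03-01"),
        Prediction("target_sockpuppet", "2025-04-01"),  # the target "predicts" his own death
    ]

    # Obscured or disputed details: nobody can claim, the incentive collapses.
    print(resolve(pool, predictions, reported_date=None))          # {}

    # The target leaks a fake report matching his own prediction and drains the pool.
    print(resolve(pool, predictions, reported_date="2025-04-01"))  # {'target_sockpuppet': 100000.0}
    ```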








  • Roko’s Basilisk hinges on the concept of acausal trade: future events can cause past events if both actors can sufficiently predict each other. The obvious problem with acausal trade is that if you’re actor B in the future, you can’t change what actor A in the past did. It’s A’s prediction of B’s action that causes A’s action, not B’s action itself. Meaning the AI in the future gains literally nothing by exacting petty vengeance on people who didn’t support its creation.
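
    A toy sketch of that objection (the functions and choices here are invented for illustration): A’s action is computed from a prediction made before B exists, so B’s actual later choice can’t reach back and change it.

    ```python
    # A (past) acts on its *prediction* of B; B (future) chooses only after A's action is fixed.

    def actor_a(predicted_b_action):
        """A decides in the past, based only on what it predicts B will do."""
        return "help build the AI" if predicted_b_action == "punish non-helpers" else "do nothing"

    def actor_b(punish):
        """B decides in the future, long after A's choice is locked in."""
        return "punish non-helpers" if punish else "spare everyone"

    # A's choice is a function of the prediction alone.
    a_choice = actor_a(predicted_b_action="punish non-helpers")

    # Whatever B actually does now, a_choice cannot change retroactively,
    # so actually carrying out the punishment buys B nothing.
    for punish in (True, False):
        print(a_choice, "|", actor_b(punish))
    ```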

    Another thing Roko’s Basilisk hinges on is that a copy of you is also you. If you don’t believe that, then torturing a simulated copy of you doesn’t need to bother you any more than the AI torturing a random innocent person would. On a related note, the AI may not be able to create a perfect copy of you. If you die before the AI is created, and nobody scans your brain (brain scanners currently don’t exist), then the AI will only have the surviving historical records of you to reconstruct you from. It may be able to create an imitation so convincing that any historian, and even people who knew you personally, will say it’s you, but it won’t be you. Some pieces of you will be forever lost.

    Then, a singularity-type superintelligence might not be possible. The idea behind the singularity is that once we build an AI, the AI will improve itself, and the improved AI will be able to improve itself faster still, leading to exponential growth in intelligence. The problem is that this basically assumes the marginal effort of getting more intelligent grows slower than linearly. If the marginal difficulty instead grows as fast as the AI’s intelligence does, the AI will keep getting smarter, but we won’t see an exponential increase. My guess would be that we’d see logistic growth of intelligence: the AI first becomes more and more intelligent, then the growth slows and eventually stagnates.
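
    A toy numerical sketch of that assumption (the cost functions and numbers are arbitrary, and this ignores any hard ceiling that would produce a true logistic curve): with a flat marginal cost, self-improvement compounds exponentially; once the marginal cost rises as fast as, or faster than, the AI’s intelligence, growth stays linear or stalls.

    ```python
    # Intelligence I grows by dI = effort / cost(I) per step, where the AI can
    # reinvest its own intelligence as effort. Only cost(I) changes between runs.

    def simulate(cost, steps=50, effort_scale=0.5):
        I = 1.0
        history = []
        for _ in range(steps):
            effort = effort_scale * I     # a smarter AI can apply more effort...
            I += effort / cost(I)         # ...but each gain costs cost(I)
            history.append(I)
        return history

    runaway    = simulate(cost=lambda I: 1.0)     # flat cost  -> exponential blow-up
    steady     = simulate(cost=lambda I: I)       # cost ~ I   -> merely linear growth
    stagnating = simulate(cost=lambda I: I**2)    # cost ~ I^2 -> growth crawls to a halt

    print(runaway[-1], steady[-1], stagnating[-1])
    ```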