I didn’t get to talk to the owner.
I live in Germany, and I spotted one of these trucks recently. It looked huge compared to every other vehicle on the road, including a delivery van. It was too big for its parking spot, and it had a Confederate flag in the back window.
Self-control is a road to many abilities, some of which are considered unnatural, such as not drinking coffee. My personal favorite is connecting two points in spacetime to create a hole you can crawl through from one side to the other, for example to connect the University of Tübingen and Boulogne-sur-Mer.
I’m not sure we’re discussing the same aspect of this thought experiment. The aspect I find Lovecraftian is that you may already be in the simulation right now. This makes the specific circumstances of our world, physics, and technology level irrelevant, as they would just be a solipsistic setup to test you on some aspect of your morality. The threat of eternal torture, on the other hand, would only apply to you if you were the real version of you, as that’s who the basilisk is actually dealing with. This works because you don’t know which of the two situations is your current one.
Wondering whether you are in a simulation is rather unproductive, as there’s basically nothing we can do about it either way. It’s basically like wondering whether God exists. In the absence of clearly supernatural phenomena, the simpler explanation is that we are not in a simulation, as any universe that can produce the simulation is by definition at least as complex as the simulation. The definition I’m applying here is that the complexity of a string is the length of the shortest program that produces it (which is at most roughly the string’s own length). Like, yes, we could be living in a simulation right now, and deities could also exist.
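That complexity definition is (roughly) Kolmogorov complexity. It isn’t computable exactly, but any compressor gives an upper bound on it; a minimal Python sketch, with zlib as an arbitrary choice of compressor:

```python
import zlib

def complexity_upper_bound(s: str) -> int:
    """Upper bound on the Kolmogorov complexity of s: the length of a
    compressed encoding (ignoring the fixed size of the decompressor,
    which only adds a constant)."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# A highly regular string has far lower complexity than its raw length:
print(complexity_upper_bound("a" * 1000), "vs", 1000)
```

The compressed length is only an upper bound: the true shortest program may be much smaller still, and no algorithm can compute it in general.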
The song “Seele Mein” (English: “My Soul” or “Soul is Mine”) is about a demon who follows a mortal from birth to death and then carries off the soul for eternal torture. Interestingly, the song is told from the perspective of the demon, who glosses over the life of the mortal and spends more than half of the song describing the torture. Could such demons exist? Certainly, there’s nothing that rules out their existence, but there’s also nothing indicating that they exist. So they probably don’t. And if you are being followed around by such a demon? Then you’re screwed. Theoretically, every higher being that has ever been thought of could exist. A supercomputer simulating our reality falls squarely into the category of higher beings. Unless we observe things that are clearly caused by such a being, wondering about its existence is pointless.
The idea behind Roko’s Basilisk is as follows: Assume a good AGI. What does that mean? An AGI that follows human values. And since the idea originated on Less Wrong, this means utilitarianism. It also means we’re dealing with a superintelligence, since on Less Wrong it’s generally assumed that we’ll see a singularity once true AGI is reached, because the AGI will just upgrade itself until it’s superintelligent. Afterwards it will bring about paradise, and thus create great value. The idea is now that it might be prudent for the AGI to punish those who knew about it but didn’t do everything in their power to bring it into existence. Through acausal trade, this would cause the AGI to come into existence sooner, as people would work harder to create it for fear of torture. And what makes this idea a cognitohazard is that merely knowing about it makes you a more likely target. In fact, people who don’t know about it, or who dismiss the idea, are safe, and will find a land of plenty once the AGI takes over.
Of course, if the AGI is created in, let’s say, 2045, then nothing the AGI can do will cause it to be created in 2044 instead.
Roko’s Basilisk hinges on the concept of acausal trade: future events can cause past events if both actors can sufficiently predict each other. The obvious problem with acausal trade is that if you’re actor B in the future, you can’t change what actor A did in the past. It’s A’s prediction of B’s action that causes A’s action, not B’s action itself. Meaning the AI in the future gains literally nothing by exacting petty vengeance on people who didn’t support its creation.
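The causal structure here can be made concrete with a toy model (the function names and payoff scheme are my own, purely illustrative): the past actor acts only on its prediction of the future actor, so once that’s happened, the future actor’s real choice can’t change anything.

```python
# Toy model of "acausal trade": A (past) acts on a prediction of B (future).
# All names and choices are illustrative assumptions, not anyone's formal model.

def a_decision(predicted_b_action: str) -> str:
    # A helps build the AI only if A predicts B will punish defectors.
    return "help" if predicted_b_action == "punish" else "ignore"

def outcome(a_action: str, b_action: str) -> dict:
    # By the time B actually chooses, A's action is fixed history.
    return {"A": a_action, "B": b_action}

past = a_decision(predicted_b_action="punish")  # A's behavior is set here
# Whatever B actually does afterwards, A's past action is unchanged:
assert outcome(past, "punish")["A"] == outcome(past, "forgive")["A"]
```

The assertion is the whole point: B choosing “punish” over “forgive” changes nothing about A’s already-completed action, so the punishment buys the AI nothing.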
Another thing Roko’s Basilisk hinges on is the idea that a copy of you is also you. If you don’t believe that, then torturing a simulated copy of you doesn’t need to bother you any more than the AI torturing a random innocent person. On a related note, the AI may not be able to create a perfect copy of you. If you die before the AI is created, and nobody scans your brain (brain scanners currently don’t exist), then the AI will only have the surviving historical records to reconstruct you from. It may be able to create an imitation so convincing that any historian, and even people who knew you personally, will say it’s you, but it won’t be you. Some pieces of you will be forever lost.
Then again, a singularity-type superintelligence might not be possible. The idea behind the singularity is that once we build an AI, the AI will improve itself, and the improved AI will be able to improve itself faster still, leading to exponential growth in intelligence. The problem is that this assumes the marginal effort of getting more intelligent grows slower than linearly. If the marginal difficulty instead grows as fast as the intelligence of the AI, the AI will keep getting more intelligent, but we won’t see an exponential increase. My guess would be that we’d see logistic growth of intelligence: the AI first becomes more and more intelligent, then the growth slows and eventually stagnates.
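The dependence on marginal difficulty can be seen in a toy simulation (entirely my own construction, not a serious model): each step, the AI converts its current intelligence into improvement, divided by a difficulty term that may itself depend on intelligence.

```python
# Toy growth model: per step, intelligence i improves by i / difficulty(i).
# The difficulty functions below are illustrative assumptions.

def simulate(difficulty, steps=50, i0=1.0):
    i = i0
    for _ in range(steps):
        i += i / difficulty(i)
    return i

runaway  = simulate(lambda i: 10.0)   # constant difficulty: exponential growth
linear   = simulate(lambda i: i)      # difficulty keeps pace: linear growth
stalling = simulate(lambda i: i * i)  # difficulty outpaces: growth stalls
```

With constant difficulty the intelligence multiplies by a fixed factor each step (the classic runaway). With difficulty proportional to intelligence, each step adds a constant amount, so the AI keeps improving but never explodes, matching the argument above.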
Sadly, yes. I live in Germany, and here you need a BahnCard50 (or better) for the train to be cheaper than the gas for driving.
Resisting Trump is a crime to these people.
Just as fast as a car if you run as fast as a car.
Eh, levels bring a linear increase in strength and durability, while a super-effective attack doubles your damage output. So you’d need twice your opponent’s level to make up for a type disadvantage. Of course, that’s assuming you’re fighting a Pokémon controlled by a human player; wild Pokémon can’t take full advantage of their type advantage.
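As a back-of-the-envelope check (assuming, as above, that output scales linearly with level; the real damage formula is more involved):

```python
# Crude model: effective power = level * type multiplier.
# This linear scaling is an assumption, not the actual game formula.
def effective_power(level, type_multiplier=1.0):
    return level * type_multiplier

# Doubling your level exactly cancels a 2x type disadvantage against you:
assert effective_power(100, 1.0) == effective_power(50, 2.0)
```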
When interpreting the comic, I find it interesting to keep in mind that a wolf pack is a family unit, consisting of parents and children. So the wolf is taking the property for his family. The comic is advocating banditry, basically.
What they’re saying is that all rights are derived from force. The state that enforces your rights uses force to do so. This comic is mostly dunking on anarcho capitalists, in that they seemingly believe that property rights are magic.
Oh no, tankies are those who believe that anybody to the right of Stalin needs to be sent to the gulag.
Given Trump’s track record with keeping contracts, let alone promises, I doubt he’s any better at returning favors.
Thanks for your reply. Are his insurance premiums going to go up?
What about the guy whose space yacht you stole? Was he another player or an NPC? If he was another player, will he have to buy a new space yacht for real money?
Umm … that AI-generated hentai on the page of the same article, though … Do the editors have any self-awareness? Reminds me of the time an admin decided the best way to call out CSAM was to link directly to the source.
The image depicts mature women, not children.
Technically, the “both sides are the same” thing should refer to the fact that both sides are focused on getting power in order to further their agenda (usually policy).
You can’t do anything without power.
I think it is called the network effect. People are still using Twitter because the messages they want to see are being posted there, and those messages are being posted there because that’s where the audience is. So, basically, people are locked in.
This also means that any loss in user count has a double effect: not only are users lost, but the utility of the service for the remaining users decreases. So, what I’m saying is, if Elon continues this way, at some point there will be a large exodus from Twitter, as each loss of users reduces Twitter’s utility further, triggering a chain reaction.
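That chain reaction can be sketched with a toy model (the thresholds and the utility measure are my own assumptions, not real data): each user stays only while the service’s utility, here simply the user count, meets their personal tolerance threshold.

```python
# Toy cascade model: user i stays iff current user count >= their threshold.
# Thresholds and "utility = user count" are illustrative assumptions.

def remaining_users(thresholds):
    users = list(thresholds)
    while True:
        n = len(users)
        stayers = [t for t in users if t <= n]
        if len(stayers) == n:
            return n      # stable: everyone left is content
        users = stayers   # some leave, utility drops, re-check everyone

stable  = remaining_users(range(1, 11))  # 10 users, all stay
cascade = remaining_users(range(3, 11))  # start 2 users short: total collapse
```

With the full population, everyone’s threshold is met and the network is stable; remove just the two most tolerant users and each round of departures pushes the count below more thresholds, until nobody is left.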
Of course, we can’t know when that happens, and since we’re both on Lemmy, we’ve already self-selected as people with little tolerance for enshittification.
If Trump wins, funding for Israel will increase, and even more Palestinians will die. So basically, you’re valuing your purity over human lives. Which is quite fascist, if you think about it.
Yes, the idea is good, I just don’t trust AI to do a good job.