The theory, which I probably misunderstand because I have a similar level of education to a macaque, states that because a simulated world would eventually develop to the point where it creates its own simulations, it’s then just a matter of probability that we are in a simulation. That is, if there’s one real world, and a zillion simulated ones, it’s more likely that we’re in a simulated world. That’s probably an oversimplification, but it’s the gist I got from listening to people talk about the theory.

But if the real world sets up a simulated world which more or less perfectly simulates itself, the processing required to run a mirror sim-within-a-sim inside it would need at least double the power/resources, no? How could the infinitely recursive simulations even begin to be set up unless the real meat people are constantly adding more and more hardware to the initial simulation? It would be like that cartoon (or was it a silent movie?) of a guy laying down train track struts while sitting on the cowcatcher of a moving train. Except in this case the train would be moving at close to the speed of light.

Doesn’t this fact alone disprove the entire hypothesis? If I set up a 1:1 simulation of our universe, then just sit back and watch, any attempts by my simulant people to create something that would exhaust all of my hardware would just… not work? Blue screen? Crash the system? Crunching the numbers of a 1:1 sim within a 1:1 sim would not be physically possible for a processor that can just about handle the first simulation. The simulation’s own simulated processors would still need to have their processing done by Meat World, you’re essentially just passing the CPU-buck backwards like it’s a rugby ball until it lands in the lap of the real world.

And this is just if the simulated people create ONE simulation. If 10 people in that one world decide to set up similar simulations simultaneously, the hardware for the entire sim reality would be toast overnight.
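The blow-up described above is just geometric growth. A toy sketch (the branching factor `k` and the unit costs are made-up illustrations, not anything from the actual hypothesis):

```python
# Hypothetical sketch: if every simulated world spawns k full-fidelity
# copies of itself, the real world's total workload grows geometrically,
# because every nested sim's computation is ultimately done by real hardware.
def total_cost(layers: int, k: int, base_cost: float = 1.0) -> float:
    """Host-side cost of `layers` of nested 1:1 sims, each world spawning k children."""
    return sum(base_cost * k**depth for depth in range(layers + 1))

print(total_cost(1, 1))   # one child sim: 2.0, i.e. double the hardware
print(total_cost(3, 10))  # 10 sims per world, 3 layers deep: 1111.0
```

With ten sims per world, three layers deep already costs over a thousand times the original simulation, which is the "toast overnight" scenario in miniature.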

What am I not getting about this?

Cheers!

  • hendrik@palaver.p3x.de · 6 months ago

    The argument with “it doesn’t have to be a realtime simulation” is good.

    Also: why should the same rules of physics we have apply to the world that runs the simulation? Maybe they have infinite energy and different physics. We can’t apply our physics to other, different universes.

    And we use a very small amount of energy on Earth, on the order of 20 terawatts as far as I know. We can’t even imagine what’s possible for a civilization that harvests a substantial fraction of its sun’s output, or has nuclear fusion power plants available. That should immediately allow for a simulation a few layers deep.
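    To give a rough sense of the gap being gestured at here (both figures are approximate, widely cited ballpark values):

    ```python
    # Rough comparison: total solar luminosity vs. humanity's average power use.
    SUN_OUTPUT_W = 3.8e26   # the Sun radiates roughly 3.8e26 watts
    HUMAN_USE_W = 2e13      # humanity's average power use, roughly 20 terawatts

    ratio = SUN_OUTPUT_W / HUMAN_USE_W
    print(f"The Sun outputs roughly {ratio:.0e} times humanity's power draw")
    ```

    A civilization capturing even a sliver of that would have computing budgets we can barely reason about.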

    And you don’t need to simulate every molecule in the universe for a good simulation. Maybe there is a trick to it. Computer games don’t simulate atoms or real gravity either, and they’re believable nonetheless. So it doesn’t even have to scale exponentially. There could be a way to make it much more manageable, so that each added layer doesn’t cost dramatically more than the last.
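    The game-engine trick is essentially lazy evaluation: nothing is computed until somebody looks at it, and results are cached afterwards. A minimal sketch (the `region_state` function and its fake "physics" are entirely hypothetical stand-ins):

    ```python
    # Hypothetical sketch: a world that only computes detail on demand,
    # the way a game only renders what is on screen.
    import functools

    @functools.lru_cache(maxsize=None)
    def region_state(x: int, y: int, seed: int = 42) -> int:
        # Stand-in for expensive physics: derived deterministically
        # from coordinates only when an observer first looks here.
        return hash((x, y, seed)) % 1000

    first_look = region_state(10, 20)    # computed now, then cached
    second_look = region_state(10, 20)   # served from cache, no recomputation
    print(first_look == second_look)     # same state both times
    ```

    Under a scheme like this, the cost scales with what observers actually perceive, not with the size of the universe, which is why the blow-up argument above wouldn’t necessarily apply.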

    Strictly speaking, you only need to simulate the state of mind and the sensory input of a few billion people. Or fewer. Or one person. If they choose to “build” a simulation themselves, only the things necessary for their perception need to be handled.

    I’d say IF we live in a simulation, it’s most likely running in a world that in fact has improbably vast resources available, and laws of physics that allow for that.