I supposed that in such a situation, Minecraft would constitute a universe. The Simulation Hypothesis, which I delved into in my second post, is a philosophical exercise and a narrative device in which we imagine that the universe we live in is itself a simulation. I'd like to examine a few of the finer points of these two ideas, so, leaving aside for now the question of who the creators of such a simulation might be, let's look at some of the ways such a thing might be structured.
First off, what is Herobrine in our example? The idea of a brain-in-a-vat is really just another construct. While we could do so, it's not necessary to imagine our mind as an actual brain in a vat. It might be something like Neo and the rest of humanity in The Matrix, or it might be something emergent. The construct is helpful because it gets us quickly to the idea that a mind encountering only one set of universal rules will never have accessed anything outside of them, and so it will, among other things:
a: Accept things that are consistent with the rules it has learned.
b: Reconcile things that aren't if they become consistent as time goes on.
c: Probably come to ignore anything anomalous if it doesn't repeat.
Now, the image of a brain-in-a-vat intelligence that I'd really like to look at is the emergent one, because it simplifies the problem of caring for that intelligence from outside the system.
If Herobrine is an actual, literal brain floating in a vat, connected to a computer running Minecraft through some sort of invasive BCI version of a virtual reality interface, then that brain has to be supplied with nutrients and cared for physically. If it starves, or is injured or killed, then the simulation is useless, because the subject is compromised. Minecraft keeps going, but the player character sits there and does nothing; the invested mind is gone.
Additionally, if the simulation stops because the computer running it is damaged, destroyed, or for whatever reason turned off, then the invested mind ceases to receive sensory data but continues to exist. You can imagine how traumatic that would be.
We can easily remove one of these entities à la Occam's razor, and simplify the problem by positing a simulated universe in which the invested mind or minds are simply taking advantage of emergent properties of the system. Imagine a set of incredibly powerful computers that assume only a very few things about the way matter, energy, and information interact, set off something like a big bang, and then watch as those rules give rise to an increasingly complex universe in which minds like ours naturally arise. If consciousness itself is the holy grail of such a system, then it doesn't matter exactly what the life forms supporting it are like, as long as they're conscious and invested in the universe. The end result of such an arrangement is most probably minds facing the same philosophical problems we encounter in our own universe. Without any idea how their universe came about, such minds are going to ask themselves 'what's going on,' or 'why am I here,' with about the same frequency as us, and get just about as much in the way of answers.
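The "few simple rules, then watch what happens" idea has a familiar toy analogue in cellular automata. The sketch below is my own illustration, not anything from Minecraft or Bostrom: it runs Conway's Game of Life, whose tiny rule set produces structures like the glider, a pattern that travels across the grid forever even though nothing in the rules mentions movement.

```python
from collections import Counter

def step(live):
    """Advance one generation. `live` is a set of (x, y) live cells."""
    # Count how many live neighbors each cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next tick if it has exactly 3 live neighbors,
    # or 2 live neighbors and was already alive. That's the whole physics.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "glider": five cells whose pattern crawls diagonally across the grid,
# an emergent structure nowhere written into the rules themselves.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
after_four = glider
for _ in range(4):
    after_four = step(after_four)
# After four generations the glider reappears, shifted by (1, 1).
assert after_four == {(x + 1, y + 1) for (x, y) in glider}
```

The analogy is loose, of course; the point is only that a handful of assumed interactions, left to run, can generate persistent, complex behavior the rules never spell out.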
At this point, we've eliminated one problem on our end. We don't have to take care of the invested minds physically. We don't even have to interact with them (and such interaction would be incredibly problematic if we did). The entire thing is contained directly within the computer system. We also don't need the Matrix's cumbersome attempt to figure out a role for humanity outside the system, because there isn't one.
I'll get into some of the other implications of this in a later post, but I do want to say one thing more before I close this one. The philosophical problem of the nature and relative meaning of the universe is not solved here; it's just shunted one level higher. If we imagine ourselves creating a universe with invested minds like the ones I'm describing (and I think Nick Bostrom does get into this, much as I have here), then while those minds might wonder how we would answer those questions were they able to ask us, we'd only be able to shrug.