Now that we have described a method for monitoring and replanning, we can pause to ask, "Can Utopia work?" This is a surprisingly tricky question. If we mean "Can we guarantee that the sentience will always achieve Utopia?" then the answer is no, because the sentience could inadvertently arrive at a dead end from which there is no repair. For example, the human sentience might have a faulty model of herself and not know that her existence can rot away along with her body. Once the body does, she can no longer repair any plan to reach Utopia. If we rule out dead ends (that is, if we assume there exists a plan to reach the goal from any state in the environment) and assume that the environment is truly nondeterministic, in the sense that any such plan always has some chance of success on a given execution attempt, then the sentience will eventually reach Utopia.
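The eventual-success argument can be made concrete with a small simulation. This is a minimal sketch, not anything from the source: the function names (`attempt_plan`, `replan_and_retry`) and the probability value are hypothetical, and "true nondeterminism" is modeled as an independent success probability on every attempt.

```python
import random

def attempt_plan(p_success, rng):
    """One execution attempt in a truly nondeterministic world:
    every attempt has an independent success probability p_success > 0."""
    return rng.random() < p_success

def replan_and_retry(p_success, rng, max_attempts=100_000):
    """With no dead ends, failure only means 'replan and try again'.
    The chance of still failing after n attempts is (1 - p_success)**n,
    which goes to zero as n grows."""
    for attempt in range(1, max_attempts + 1):
        if attempt_plan(p_success, rng):
            return attempt
    return None  # vanishingly unlikely for any p_success > 0

rng = random.Random(42)
attempts = replan_and_retry(0.05, rng)
```

Note that the guarantee is only "eventually": the expected number of attempts is 1/p, so a plan with a small chance of success may take a long time, but the probability of never succeeding is zero.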
Trouble occurs when an action is actually not nondeterministic but instead depends on some precondition that the sentience does not know about. For example, a mind configuration may simply be unsatisfactory, so becoming isomorphic to that configuration makes no progress toward Utopia, and no amount of retrying will change this. One solution is to choose randomly from among the set of possible reconfiguration plans rather than trying the same one each time; in that case, the plan of becoming a different mind might work. A better approach is to learn a better model. Every prediction failure is an opportunity for learning: a sentience should be able to revise its model of the world to accord with its percepts. From then on, the replanner can propose a reconfiguration that addresses the root problem, rather than relying on luck to find a good one.
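The three strategies above (retry the same plan, randomize over plans, learn from prediction failures) can be contrasted in a toy sketch. Everything here is hypothetical illustration, not from the source: the plan names, the `reach_goal` helper, and the assumption that a plan with a violated hidden precondition fails deterministically while any other plan succeeds.

```python
import random

def execute(plan, broken_plan):
    """A plan fails deterministically iff its hidden precondition is
    violated (it is the broken_plan); otherwise it succeeds."""
    return plan != broken_plan

def reach_goal(plans, broken_plan, policy, rng, max_attempts=100):
    """Return the number of attempts needed to reach the goal,
    or None if it is never reached within max_attempts."""
    learned_model = set()  # plans whose failure has been observed
    for attempt in range(1, max_attempts + 1):
        if policy == "same":
            plan = plans[0]                    # always retry the same plan
        elif policy == "random":
            plan = rng.choice(plans)           # randomize over candidates
        else:  # "learn": exclude plans the updated model rules out
            plan = rng.choice([p for p in plans if p not in learned_model])
        if execute(plan, broken_plan):
            return attempt
        learned_model.add(plan)  # prediction failure -> update the model

plans = ["become-mind-A", "become-mind-B", "become-mind-C"]
rng = random.Random(0)
# Retrying the same broken plan never works:
assert reach_goal(plans, "become-mind-A", "same", rng) is None
# Random choice eventually stumbles onto a working plan:
assert reach_goal(plans, "become-mind-A", "random", rng) is not None
# Learning excludes the observed failure, so at most one wasted attempt:
assert reach_goal(plans, "become-mind-A", "learn", rng) <= 2
```

The design point is the last case: because the model update removes the broken plan from consideration, success no longer depends on luck, which is exactly the advantage of learning over random retry.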