Throughout this entire discussion you continue to ignore or miss the point: if it were possible to calculate the probabilities in real time such that some outcome were superior locally or globally (meaning one outcome is objectively better than the others along some metric, e.g. survival of robots and extinction or enslavement of humans), then that local or global result would collapse the present+future into the past, and everything within its context would be known. Thus either the context is global and nothing exists [no (friction of) movement of time, present+future == past, i.e. no mass in the universe], or that local result collapses only its own context such that present+future == past, and it must therefore be disconnected from (unaffected by) the unknown global probabilities. (P.S. current science has no clue what mass really is, but I have explained what it must be)
I already pointed out to you that speed of computation or transmission does not transfer the entropy (because if it did, the entropy would be destroyed and the present+future would collapse into the past). Cripes man, have you ever tried to manage something happening in another location over the phone or a webcam? Each local situation is dynamic and interacting with the diversity of the actors in that local situation. Even if you could virtually transmit yourself (or the master robot) to every location simultaneously, then one entity would be doing all the interaction and you would no longer have diverse entropy. If instead your robots are autonomous and decentralized, the speed of their computation will not offset the fact that their input entropy is more limited than the diversity of human entropy, because each human is unique along the entire timeline, from DNA history through environmental development in the womb.

You see, it is actually the mass of the universe and the zillion incremental interactions of chance over the movement of time that create that entropy. You can't speed it up without collapsing some of the probabilities into each other and reducing the entropy: study the equation for entropy (Shannon entropy, or any other form such as the thermodynamic or biological form); a small illustration follows below. To increase entropy there must be more chance, i.e. more cases of failure, not fewer. Remember the point I made about fitness: there are hypothetically infinite shapes, and most of them fail to interlock. Computation is deterministic and not designed to have failure. Once you start building failure into the robots' CPUs, you would have to recreate biology. You simply won't be able to do a better job of maximizing entropy than nature already does, because the system of life will anneal to it. Adaptation is all about diverse situations occurring simultaneously, but don't forget all the diverse failures occurring simultaneously as well.
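To make that concrete, here is a minimal sketch using Shannon entropy, H = -sum(p * log2 p), with purely illustrative distributions: merging distinct outcomes into one another never increases the entropy, and here it strictly reduces it.

```python
# Minimal sketch: Shannon entropy of a discrete distribution, H = -sum(p * log2(p)).
# The two distributions below are illustrative only.
from math import log2

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Eight equally likely distinct outcomes: maximum diversity for eight cases.
diverse = [1/8] * 8
print(shannon_entropy(diverse))    # 3.0 bits

# Collapse pairs of outcomes into single cases (same total probability, fewer cases).
collapsed = [1/4] * 4
print(shannon_entropy(collapsed))  # 2.0 bits -- the entropy drops
```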
If you can know with high confidence what is more likely to happen, then you can anticipate your reaction before its immediate cause occurs, enough to compensate for the few tenths of a second of lag in talking across the world. You could also be prepared for numerous possible deviations from your expectation. And for many things, you don't need to start reacting faster than it takes data to cross from one side of the world to the other.
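As a toy illustration of that lag compensation (the numbers are my own assumptions, not anything established here): if you can extrapolate the remote state forward by the known delay, you act on the predicted state rather than the stale report.

```python
# Toy sketch of compensating for transmission lag by predicting ahead.
# All numbers are illustrative assumptions.
def predict_ahead(position, velocity, lag_seconds):
    """Dead-reckon the last reported state forward by the known lag."""
    return position + velocity * lag_seconds

stale_position = 10.0   # last reported remote state (arbitrary units)
velocity = 3.0          # last reported rate of change
lag = 0.2               # a few tenths of a second of round-the-world delay

print(predict_ahead(stale_position, velocity, lag))  # 10.6 -- act on where it likely is now
```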
Obviously, the further into the future you look, the less accurate simplified models get; but machines can push the point where a simulation is no more accurate than random chance much further down the road than humans can.
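A hedged sketch of that horizon idea, under the common assumption that forecast error grows roughly exponentially with lookahead: the useful horizon ends where the forecast error matches the spread you would get from random guessing, so a better model (smaller initial error) pushes that point further out. The numbers are illustrative only.

```python
# Sketch: if forecast error grows as error(t) = initial_error * exp(growth_rate * t),
# the forecast is no better than chance once error(t) reaches the system's natural spread.
from math import log

def forecast_horizon(initial_error, growth_rate, chance_spread):
    """Lookahead time at which forecast error matches the spread of random guessing."""
    return log(chance_spread / initial_error) / growth_rate

# Illustrative numbers only:
print(forecast_horizon(initial_error=0.1, growth_rate=0.5, chance_spread=10.0))   # ~9.2
print(forecast_horizon(initial_error=0.01, growth_rate=0.5, chance_spread=10.0))  # ~13.8 -- better model, longer horizon
```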
edit: Simulations can involve failures, and AIs can plan for failure as well as learn from it.
I mean the entire concept has flown right over your head and you continue repeating the same nonsense babble.
You just can't wrap your mind around what entropy and diversity are, or why the speed of computation and data transmission have nothing to do with them.
What makes you think evolving machines wouldn't have at least as much entropy and diversity as humans?