You still don't realize you didn't get my point.
And I don't say that to be condescending. Sometimes there are high-IQ concepts that can't be conveyed to the wider population in a forum post or even a concise blog article; they require chapters of images and explanatory text to paint the concept into the minds of others.
All I can say concisely is: try rereading my prior post, as it refutes your reply. Each of the points in your reply was already refuted in that post. I suppose I would need to expound, but for those who are smart enough, I don't need to; I have already stated my case.
For example, "right often enough" totally ignores the point I made that no one can know what is right except for themselves, and even then they don't really know what is right for themselves either. They simply made a choice with tradeoffs and impacts. Refer upthread (or to the linked related thread), where I mentioned that infinite shapes tested for interlocking fitness wouldn't be superior to each other, just different.
Life doesn't have a "correct" or "right" result, except to become more diverse.
If there were a metric for "correct" or "right", then the present and past would collapse into a single point in time and you would not exist (you would be disconnected from the universe of chance), even if that metric were only local to you (your local coherence). And if it had global coherence, then the present and past of the entire universe would collapse into a single point in time and the universe itself wouldn't exist, because there would be no change that isn't already known, i.e. no chance, no probabilities, and ZERO ENTROPY.
There isn't a universal "best outcome", but there are outcomes that are the best of all predicted possibilities for a single individual or group. Machines can be made to calculate the inputs required for such a best outcome, and those that calculate the moves toward the best outcome for themselves, and perform those moves, will in the long term be the ones that survive all others; so those are gonna be humanity's competitors, if there are still any humans left by that point.
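To sketch what I mean in a few lines of Python (all the move names, payoffs, and risk numbers here are made up for illustration; this is a toy, not a claim about how such a machine would actually be built): the machine simulates many possible futures for each available move and picks whichever is best *for itself* on average.

```python
import random

def expected_payoff(move, simulations=10_000):
    """Estimate a move's payoff by averaging many simulated futures."""
    total = 0.0
    for _ in range(simulations):
        # Each simulated future perturbs the nominal payoff with chance.
        total += move["payoff"] + random.gauss(0, move["risk"])
    return total / simulations

def best_move(moves):
    """Pick the move whose predicted outcome is best for this agent."""
    return max(moves, key=expected_payoff)

# Hypothetical options with made-up mean payoffs and noise levels.
moves = [
    {"name": "cooperate", "payoff": 3.0, "risk": 0.5},
    {"name": "defect",    "payoff": 2.0, "risk": 2.0},
    {"name": "hoard",     "payoff": 3.5, "risk": 1.5},
]
print(best_move(moves)["name"])  # picks the highest expected payoff: "hoard"
```

The point isn't the code; it's that "best" here is always *relative to the agent's own payoff table*, which is exactly why such optimizers end up as competitors.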
Evolution doesn't care about perfection; anything that is good enough keeps going (as long as it remains good enough). Even in the absence of any external evolutionary pressure, the pressure for self-reproduction emerges from the simple fact that patterns that don't perpetuate themselves cease to be.
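You can watch that emergence in a toy simulation (the population size, reproduction probabilities, and capacity are arbitrary numbers I picked for the sketch): no external selection is applied at all, yet non-replicators vanish simply because nothing keeps them around.

```python
import random

def generation(population, capacity=1000):
    """One time step: each pattern copies itself with its own probability."""
    offspring = []
    for repro_prob in population:
        if random.random() < repro_prob:
            offspring.append(repro_prob)  # the pattern persists...
            offspring.append(repro_prob)  # ...and leaves one extra copy
    random.shuffle(offspring)
    return offspring[:capacity]           # finite world: cap the population

# Half the patterns never reproduce (prob 0.0); the other half are merely
# "good enough" replicators (prob 0.9 -- nowhere near perfect).
pop = [0.0] * 500 + [0.9] * 500
for _ in range(20):
    pop = generation(pop)

print(sum(1 for p in pop if p == 0.0))  # non-replicators remaining: 0
```

After a handful of generations the population is entirely "good enough" replicators, with no fitness function ever having been defined.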
In other words, machines don't need to be perfect, they just need to be better than humans for there to be reason for concern.
They might not be the last new form of life this planet will see, but the odds are high that they'll be the last one humans will (if post-singularity AIs are ever created, of course).
I think you're underestimating what it means to think better than humans, perhaps, ironically, because you believe you do. Lemme put it another way: even if that were the case, the difference is that you can't improve your mind as efficiently as a post-singularity AI could; if one were created right now, you would be left behind by at least a few orders of magnitude in the blink of an eye.