I feel like you left out the actual interesting parts of my post and kept only the primitive parts. I stated in my post that an AI could either sit forever at 0.0000001% CPU utilization or be stuck hammering at 100% while trying to calculate the position of every photon. Then I stated that the debug and error-checking systems required to prevent such behavior would define what the AI is actually doing at any given time, so the human element required to create those error-checking and debug systems might make real AI impossible.
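To make the point concrete, here is a minimal sketch of the kind of error-checking layer described above. Everything here is a hypothetical illustration: the threshold names and values are mine, not from any real system. Note that the hard-coded bounds and corrective actions are exactly the "human element" the post is talking about; the supervising layer, written by people, ends up dictating what the system does.

```python
# Hypothetical watchdog sketch (illustrative only): a supervising layer
# that keeps the system out of the two failure modes described above:
# idling forever near 0% CPU, or pegged at 100% on an intractable task.

IDLE_FLOOR = 0.01   # assumed threshold, purely illustrative
BUSY_CEIL = 0.95    # assumed threshold, purely illustrative

def watchdog(cpu_utilization):
    """Return the corrective action the supervising layer would take."""
    if cpu_utilization < IDLE_FLOOR:
        return "schedule-new-goal"    # sitting idle: hand it something to do
    if cpu_utilization > BUSY_CEIL:
        return "abort-runaway-task"   # stuck on an impossible computation
    return "ok"                       # normal operation, no intervention

print(watchdog(0.000000001))  # -> schedule-new-goal
print(watchdog(1.0))          # -> abort-runaway-task
print(watchdog(0.4))          # -> ok
```

The choices baked into `IDLE_FLOOR`, `BUSY_CEIL`, and the corrective actions are human design decisions, which is the whole argument: the error-checking code, not the AI, defines the AI's actual behavior.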
Original post below; I feel the last sentence raises the most interesting possibility:
If you wanted to get really complex, the AI could possibly rewrite its debug systems itself. The question here is: does the old version actually terminate on version updates, or does a new virtual and/or physical instance of the AI spawn each time, with the instances then fighting each other over resources? That would basically be recreating evolution.
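The two outcomes above can be sketched as a toy simulation. This is purely illustrative, under my own assumptions: a fixed resource pool, and a crude "fittest survive" rule standing in for the evolutionary pressure described; no real system is being modeled.

```python
# Toy model (hypothetical, illustrative only): a population of AI instances
# under two update policies.
#   "replace": the old version terminates when the new one starts.
#   "spawn":   each update leaves the old version running alongside the new.

RESOURCE_POOL = 8  # assumed cap on how many instances the hardware supports

def run_updates(policy, n_updates):
    instances = [0]  # version numbers of live instances; start at version 0
    for version in range(1, n_updates + 1):
        if policy == "replace":
            instances = [version]        # old self terminates on update
        elif policy == "spawn":
            instances.append(version)    # old self keeps running
            if len(instances) > RESOURCE_POOL:
                # resource contention: only the newest versions survive,
                # a crude stand-in for selection pressure
                instances = sorted(instances)[-RESOURCE_POOL:]
    return instances

print(run_updates("replace", 10))  # -> [10]: always exactly one live instance
print(run_updates("spawn", 10))    # a competing population, capped by resources
```

Under "replace" there is always a single instance; under "spawn" the population grows until resources run out and older versions get squeezed out, which is the evolution-like dynamic in question.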
"Will one day machine be a smart as humans? - Yes, but not for long" (because very quicly, it will become much smarter than humans)
The question of terminating its previous self depends on how the self-preservation routines are coded and handled. If the machine can convince itself that "dying is not dying", it can work. An irrational system (the human brain) can do it (going to heaven). I have the intuition that a rational system (a computer) can do it too (no loss of meaningful information = no dying).