Board: Altcoin Discussion
Re: The bottom will drop out of the alt market soon
by r0ach on 14/04/2016, 09:39:17 UTC
What exactly do you define the Armstrong model as?  When Bitcoin Core devs get mad at me about some random technical disagreement, they usually say things like "come on, you're not omniscient."  So are you saying that when I call it an error-prone, probabilistic model, I'm wrong?  What else could it possibly be?  Are you really going to assign omniscient traits to this guy's gambling system?

As for your website, it's kind of bizarre that I reached conclusions about AI similar to yours, almost word for word, in the David Latapite "transhumanism" thread from a year ago, where I said true AI is impossible without recreating human evolution, except in probably more detail.  The actual, real danger of AI, or of attempts to create it, is in the last paragraph.  From my post:


I stated in my post that an AI could either sit forever at 0.0000001% CPU utilization or stay hammered at 100% while trying to calculate the position of every photon.  Then I stated that the debug and error-checking systems required to prevent such behavior would define what the AI is actually doing at any given time, so the human element required to create those error-checking and debug systems might make real AI impossible.


If you wanted to get really complex, the AI could possibly rewrite its debug systems itself.  The question here is: does the old version actually terminate on version updates, or does a new virtual and/or physical instance of the AI spawn each time, with the instances then fighting each other over resources?  It would basically be recreating evolution.

If this is real AI we're talking about, it's going to be dealing with abstract ideas, not just number crunching.  To really advance, the AI would have to use trial and error, or experimentation, to move forward in new areas.  If any trial and error is involved, it might want a failsafe: the old code acting as a mechanic on the new code should something go wrong with its experimental upgrade.  I think the variables I've outlined will force multiple, diverging AIs out into the real world, basically replicating biological evolution.
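To make that forking idea concrete, here's a minimal toy sketch in Python (my own illustration; the "skill" score, capacity, and mutation size are all made-up parameters, not any real system): each experimental self-rewrite forks a new version while the old one is kept as a fallback, and the accumulated versions then compete for a fixed resource pool, which is exactly a selection pressure.

```python
import random

random.seed(1)

# Each "AI version" is just a skill number; an experimental self-rewrite
# perturbs it.  The old version is NOT terminated on update -- it survives
# as a fallback -- so versions accumulate and must compete for resources.

def self_rewrite(skill):
    # Experimental upgrade: may help or hurt, like any trial-and-error step.
    return skill + random.gauss(0, 1.0)

versions = [0.0]   # start with a single version
CAPACITY = 8       # fixed resource pool supports at most 8 live versions

for step in range(100):
    parent = random.choice(versions)
    versions.append(self_rewrite(parent))   # fork: old version keeps running
    if len(versions) > CAPACITY:
        versions.sort(reverse=True)
        versions = versions[:CAPACITY]      # weakest versions starve

print(len(versions), max(versions))
```

Even with blind, random rewrites, the combination of forking plus resource scarcity produces steadily improving survivors, i.e. evolution by another name.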

There is also the issue that you will probably have to replicate biological evolution to create AI at all, since it can't be created from scratch for the reasons I've described, where the human-created error-checking and debug systems would define everything the AI does.  The only viable way to do it is the one I talked about below:

Instead of trying to create AI from scratch, with human-based error-checking and debug rules encompassing all of its functionality, if all you did was try to digitize a rat brain, the low overhead of machine reproduction could accelerate natural selection so fast that it turns from rat to god overnight, possibly while just sitting inside a simulator fighting other rats.  So then the question is: what is the lowest-level organism that would need to be digitized to accomplish such a task?

In this model, you're not actually trying to create high-level organisms; you're just trying to lower the overhead of natural selection on more primitive ones.  If the machines only used asexual reproduction, you could end up with nothing but great white shark, apex-predator-type creatures, because they're not really required to interact with other entities in a non-hostile manner.  You might have to force non-asexual reproduction to achieve higher levels of advancement in the realm of communication, etc.
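The asexual-versus-sexual distinction can be sketched as a toy genetic algorithm (again my own illustration in Python; the bit-string "genome", population size, and rates are invented for the example, not a claim about how real digitized evolution would work): asexual lineages only mutate, while sexual ones also recombine two parents via crossover.

```python
import random

random.seed(0)

GENOME_LEN = 32
TARGET = [1] * GENOME_LEN  # fitness = number of bits matching the target

def fitness(g):
    return sum(a == b for a, b in zip(g, TARGET))

def mutate(g, rate=0.02):
    return [1 - bit if random.random() < rate else bit for bit in g]

def crossover(a, b):
    # Sexual reproduction: splice two parent genomes at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def evolve(pop, generations, sexual):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]        # top half survives
        children = []
        while len(children) < len(pop) - len(survivors):
            if sexual:
                a, b = random.sample(survivors, 2)
                children.append(mutate(crossover(a, b)))
            else:
                children.append(mutate(random.choice(survivors)))
        pop = survivors + children
    return max(fitness(g) for g in pop)

pop0 = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(40)]
asexual_best = evolve([g[:] for g in pop0], 60, sexual=False)
sexual_best = evolve([g[:] for g in pop0], 60, sexual=True)
print(asexual_best, sexual_best)
```

The point isn't the numbers; it's that whether you allow recombination is a knob the experimenter sets, which is what "forcing non-asexual reproduction" would mean in practice.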

Your post:

Quote
Thus for computers to obtain the same entropy as the collective human brainpower, they would need to reproduce like humans, contributing to the genome and interacting with the environment in the ways humans do. Even if computers could do this, the technological singularity would not occur, because the computers would be equivalent to adding more humans to the population.