Basing it on RAM is even more foolish.
While most consumer-grade hardware only supports ~16GB per system and the average computer likely has ~4GB, there already exist specialized motherboards which support up to 16TB per system. This would give a commercial miner 4000x the hashing power of an average node. A commercial miner is always going to be able to pick the right hardware to maximize yield. Limiting the hashing algorithm by RAM wouldn't change that.
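(For reference, the 4000x figure is roughly just the RAM ratio, assuming hashing power in a memory-bound scheme scales linearly with installed RAM:)

$$\frac{16\ \mathrm{TB}}{4\ \mathrm{GB}} = \frac{16\,384\ \mathrm{GB}}{4\ \mathrm{GB}} = 4096 \approx 4000\times$$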
And they get this 16TB of RAM for free? RAM is expensive, and the kind of RAM usually used in servers is more expensive than consumer RAM. And again, even if they manage to make it a bit more efficient, it's not close to competing with the advantage of already owning a computer.
BTW, 2GB would be a poor choice, as many GPUs now have 2GB of video RAM; the entire solution set could then fit in VRAM, and GDDR5 is roughly an order of magnitude faster (in bandwidth) than DDR3 (desktop RAM).
You need 2GB per instance. You can't parallelize within that 2GB to bring all the GPU's ALUs to bear. GPU computation and RAM are very parallel but not "fast"; this takes away their advantage.
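To make that concrete, here is a minimal scrypt-style sketch in Python (the scratchpad size, hash choice, and structure are illustrative assumptions, not the actual algorithm under discussion). Each candidate nonce needs its own scratchpad, and the second loop's lookups are data-dependent and sequential, so extra ALUs don't help within a single instance:

```python
import hashlib

# Toy sketch of a scrypt-style memory-hard hash (illustrative only).
# Each *instance* must hold its own scratchpad; the data-dependent,
# sequential lookups are why thousands of GPU ALUs can't cooperate
# on a single instance.
SCRATCHPAD_WORDS = 1 << 16   # tiny for demo; imagine 2 GB worth per instance

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def memory_hard_hash(seed: bytes) -> bytes:
    # Phase 1: fill the scratchpad sequentially from the seed.
    pad = []
    x = H(seed)
    for _ in range(SCRATCHPAD_WORDS):
        pad.append(x)
        x = H(x)
    # Phase 2: data-dependent reads -- each index depends on the previous
    # result, so the loop cannot be split across parallel execution units.
    for _ in range(SCRATCHPAD_WORDS):
        j = int.from_bytes(x[:4], "big") % SCRATCHPAD_WORDS
        x = H(x + pad[j])
    return x

print(memory_hard_hash(b"block header || nonce").hex())
```

A card with 2GB of VRAM could hold only one such scratchpad at a time, so its thousands of shader cores would mostly sit idle.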
Sure, we don't want a monopoly, but as long as no entity achieves critical mass we don't need 200K+ nodes either. If you are worried about the strength of the network, a better change would be one with gradually decreasing efficiency as a pool gets larger, i.e. a non-linear relationship between hashing power and pool size. This would cause pools to stabilize at a sweet spot that minimizes variance while minimizing the penalty from the non-linear hashing relationship. Rather than deepbit having 50%, the next 10 pools having 45%, and everyone else making up 5%, you would likely see the top 20 pools each having on average ~4% of network capacity.
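As a purely hypothetical illustration of such a non-linear rule (the exponent and the formula are assumptions, not a concrete proposal), suppose effective hashpower grew as raw hashpower raised to a power below one; a pool's efficiency would then fall as it grows:

```python
# Hypothetical sublinear reward rule, just to make the idea concrete.
ALPHA = 0.9   # < 1: effective hashpower grows slower than raw hashpower

def effective_hashpower(raw_ghs: float) -> float:
    """Effective (reward-earning) hashpower under the hypothetical rule."""
    return raw_ghs ** ALPHA

for raw in (10, 100, 1_000, 10_000):
    eff = effective_hashpower(raw)
    print(f"raw {raw:>6} GH/s -> effective {eff:8.1f} GH/s "
          f"({eff / raw:.0%} efficiency)")
```

Under a rule like this, miners would abandon a pool once its efficiency penalty outweighed the variance reduction it offers, which is the stabilizing "sweet spot" described above.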
There's no need for that; the "deepbit security problem" exists only because of an implementation detail. Currently the pool both handles payments and generates getwork, but there's no need for this to be the case. In theory miners can generate work themselves, or get it from another node, and still mine for the pool. Also, things like p2pool (as a substrate for proxy pools) can do away with the need for giant centralized pools to reduce variance.
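A minimal sketch of that separation (all names, the share format, and the single-transaction "merkle tree" are simplifying assumptions, not a real pool protocol): the miner builds its own block template whose coinbase pays the pool's address, so the pool only has to verify submitted shares and handle payouts:

```python
import hashlib

# Hypothetical sketch of "miner generates its own work but mines for the pool".
POOL_PAYOUT_ADDRESS = "1PoolPayoutAddressXXXXXXXXXXXXXXXX"  # assumption

def build_coinbase(pool_address: str, height: int) -> bytes:
    # Simplified stand-in for a real coinbase transaction.
    return f"coinbase:height={height}:payto={pool_address}".encode()

def build_block_header(prev_hash: bytes, merkle_root: bytes, nonce: int) -> bytes:
    # Simplified stand-in for a real 80-byte block header.
    return prev_hash + merkle_root + nonce.to_bytes(4, "little")

def mine_share(prev_hash: bytes, height: int, share_target: int) -> tuple[int, bytes]:
    # The miner assembles its own template from its own node's data...
    coinbase = build_coinbase(POOL_PAYOUT_ADDRESS, height)
    merkle_root = hashlib.sha256(coinbase).digest()  # single-tx "tree" for brevity
    nonce = 0
    while True:
        header = build_block_header(prev_hash, merkle_root, nonce)
        h = hashlib.sha256(hashlib.sha256(header).digest()).digest()
        if int.from_bytes(h, "little") < share_target:
            return nonce, h   # ...and submits (nonce, header) to the pool as a share
        nonce += 1

nonce, h = mine_share(b"\x00" * 32, 123456, 1 << 240)
print("share found at nonce", nonce, "hash", h.hex())
```

When verifying a share, the pool would also check that the coinbase pays its address, so the miner still can't redirect a full block reward while claiming pool credit, yet the pool is no longer the single source of work.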