Again, there is no "paradox" here.
-snip-
Your argument is a waste of time (no offense). This is not helpful at all.
All I'm saying is that bitcoin can be viewed as different things, depending on the time you live in.
If you live in 1995, it could be "impossible"
If you live in 2015, it could be "problematic to scale"
If you live in 2035, it could be "fantastic"
...and the basic algorithm could be practically the same; the only thing that will have changed is that hardware and networks will have improved in capability by 1000x to 10000x.
So the problems are not necessarily "inherent" to bitcoin, but rather the result of a software-hardware-network equation.
Obviously, by improving the software right now we can do more with scaling. For example, I was reading about ASICs doing the validation. Well, why ASICs and not GPUs for a start? Validation should be relatively easy to port with OpenCL and the like, so both CPU and GPU could be exploited for maximum performance. The processing part could definitely get a boost there, because GPUs are much, much faster at that kind of parallel work (see the sketch below).
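To make the GPU idea a bit more concrete, here is a rough pyopencl sketch of the dispatch pattern only: each GPU work item handles one fixed-size "transaction" record independently. A real script/signature validation kernel would obviously be far more involved; the kernel here just checksums each record as a stand-in, and names like TX_SIZE and check_tx are made up for the example.

```python
# Minimal pyopencl sketch: one GPU work item per fixed-size "transaction" record.
# The checksum kernel is only a placeholder for real validation work.
import numpy as np
import pyopencl as cl

TX_SIZE = 256                    # hypothetical record size in bytes
N_TX = 4096                      # records in this batch

# Fake batch of transactions as raw bytes.
txs = np.random.randint(0, 256, size=N_TX * TX_SIZE, dtype=np.uint8)
results = np.zeros(N_TX, dtype=np.uint32)

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags

kernel_src = """
__kernel void check_tx(__global const uchar *txs,
                       __global uint *results,
                       const uint tx_size)
{
    uint gid = get_global_id(0);
    uint sum = 0;
    // Stand-in for real validation: checksum this record's bytes.
    for (uint i = 0; i < tx_size; i++)
        sum += txs[gid * tx_size + i];
    results[gid] = sum;
}
"""

prg = cl.Program(ctx, kernel_src).build()
tx_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=txs)
res_buf = cl.Buffer(ctx, mf.WRITE_ONLY, results.nbytes)

# One work item per record; the driver spreads the batch across the GPU.
prg.check_tx(queue, (N_TX,), None, tx_buf, res_buf, np.uint32(TX_SIZE))
cl.enqueue_copy(queue, results, res_buf)
print(results[:8])
```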
As for storage, that's where I suspect we'll need a breakthrough. Compression will probably be one thing that gives serious gains (a simple gzip on a ~130mb block.dat takes me down to something like 100mb), but since CPUs are slow at compressing/decompressing large data sets, maybe that too should be handed to the GPU as multiple "blocks" of compressed data, to exploit GPU parallelism (rough sketch below).

If a computer can handle compressed data with virtually no lag, then perhaps there's a positive spillover effect for the network itself: nodes could transmit compressed data packages that are compressed/decompressed in realtime by the GPUs for near-zero lag. I think harnessing GPU power is definitely something worth exploring for future scaling, in more than one way.
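As a rough illustration of the "blocks of compressed data" idea (using CPU threads as a stand-in for the GPU), here's a Python sketch that splits a block file into independent chunks, gzips each chunk separately, and then decompresses the chunks in parallel. Because every chunk is a self-contained gzip stream, a receiver can work on several at once instead of waiting on one big stream. The file name and chunk size are illustrative only.

```python
# Chunked compression sketch: independent gzip streams allow parallel decompression.
import gzip
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4 * 1024 * 1024      # 4 MB chunks, arbitrary choice

def compress_file_in_chunks(path):
    """Read the file in fixed-size pieces and gzip each piece on its own."""
    chunks = []
    with open(path, "rb") as f:
        while True:
            raw = f.read(CHUNK_SIZE)
            if not raw:
                break
            chunks.append(gzip.compress(raw))
    return chunks

def decompress_chunks_parallel(chunks, workers=8):
    # Each chunk decompresses independently, so a thread pool (or, per the
    # idea above, GPU compute units) can work on several chunks at once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(gzip.decompress, chunks))

if __name__ == "__main__":
    compressed = compress_file_in_chunks("block.dat")   # illustrative file name
    restored = decompress_chunks_parallel(compressed)
    ratio = sum(len(c) for c in compressed) / len(restored)
    print(f"{len(compressed)} chunks, compressed to {ratio:.0%} of original size")
```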