As I understand it, proof of work was originally intended to distribute the ability to produce blocks as randomly as possible - anyone with a CPU.
It wasn't intended to distribute block production as randomly as possible. Whoever had the most computational power was always meant to have the highest chance of being rewarded.
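To make that point concrete, here is a toy Python simulation (the miner names and hash rates are invented for illustration): over many blocks, each participant's share of rewards tracks its share of total hash power, regardless of how many participants there are.

```python
import random

# Toy lottery: each miner's chance of producing the next block is
# proportional to its share of total hash power. The names and
# rates below are made-up illustration values, not real figures.
hash_rates = {"cpu_user": 1, "gpu_rig": 20, "server_farm": 500}

miners = list(hash_rates)
weights = list(hash_rates.values())

# Draw 10,000 "blocks"; each draw is weighted by hash rate.
wins = {m: 0 for m in miners}
for winner in random.choices(miners, weights=weights, k=10_000):
    wins[winner] += 1

total = sum(weights)
for m in miners:
    expected = hash_rates[m] / total
    print(f"{m}: won {wins[m]} blocks (expected share {expected:.1%})")
```

The CPU user still wins occasionally - the lottery is random - but the expected reward is strictly proportional to hash power.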
Also:
If the network becomes very large, like over 100,000 nodes, this is what we'll use to allow common users to do transactions without being full-blown nodes. At that stage, most users should start running client-only software and only the specialist server farms keep running full network nodes, kind of like how the Usenet network has consolidated.
For now, everyone just runs a full network node.
I anticipate there will never be more than 100K nodes, probably less. It will reach an equilibrium where it's not worth it for more nodes to join in. The rest will be lightweight clients, which could be millions.
At equilibrium size, many nodes will be server farms with one or two network nodes that feed the rest of the farm over a LAN.
The current system where every user is a network node is not the intended configuration for large scale. That would be like every Usenet user running their own NNTP server. The design supports letting users just be users. The more burden it is to run a node, the fewer nodes there will be. Those few nodes will be big server farms. The rest will be client nodes that only do transactions and don't generate.
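The client-only mode described above matches what the Bitcoin whitepaper calls simplified payment verification: a client keeps only the chain of block headers and checks that a transaction is in a block by recomputing the Merkle root from a branch of sibling hashes supplied by a full node. A minimal Python sketch of that check (the function names are mine; Bitcoin hashes tree nodes with double SHA-256 and duplicates the last hash on odd-length levels):

```python
import hashlib

def dhash(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    """Full-node side: compute the Merkle root of a list of leaf hashes."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [dhash(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def merkle_branch(leaves, index):
    """Full-node side: collect the sibling hashes a client needs for leaf `index`."""
    branch, level = [], list(leaves)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        branch.append(level[sibling])
        level = [dhash(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
        index //= 2
    return branch

def verify_branch(tx_hash, branch, index, root):
    """Light-client side: recompute the root from one hash and its branch."""
    h = tx_hash
    for sibling in branch:
        # index parity says whether our node is the left or right child
        h = dhash(h + sibling) if index % 2 == 0 else dhash(sibling + h)
        index //= 2
    return h == root

# Demo with four fake transactions.
txs = [dhash(f"tx{i}".encode()) for i in range(4)]
root = merkle_root(txs)
proof = merkle_branch(txs, 2)
print(verify_branch(txs[2], proof, 2, root))   # True
```

The branch grows only logarithmically with the number of transactions in a block, which is why millions of lightweight clients can be served by a comparatively small set of full nodes.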
I'm making this post to see what everyone thinks. Is this an exciting idea to you? Do you think I'm deluded?
Would you mind explaining to us how you will achieve this?