With blockchain FIFO, we should only be bandwidth-limited. I haven't heard any objections to using FIFO for the blockchain, and it solves the giant HDD usage problem. A binary format plus an adaptive max TPS will allow support of 1000 TPS bursts on 100 kbps.
This means that in Belarus, with connections well above 100 kbps, bursts of 10,000 TPS would not be a problem.
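To make the FIFO idea concrete, here is a rough sketch (the class and method names are made up for illustration, not actual NXT code): keep only the most recent N blocks locally and let the oldest fall off as new ones arrive.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative only: a node keeps a FIFO window of recent blocks and
// discards (or archives) anything older, so local storage stays bounded.
public class FifoChain {
    private final int maxBlocks;                  // size of the FIFO window
    private final Deque<byte[]> recentBlocks = new ArrayDeque<>();

    public FifoChain(int maxBlocks) {
        this.maxBlocks = maxBlocks;
    }

    // Append the latest block; once the window is full, the oldest block
    // falls out the other end instead of accumulating on disk.
    public void addBlock(byte[] serializedBlock) {
        recentBlocks.addLast(serializedBlock);
        if (recentBlocks.size() > maxBlocks) {
            byte[] expired = recentBlocks.removeFirst();
            archive(expired);                     // or simply discard it
        }
    }

    // Placeholder: push expired blocks to cheap cold storage or archival peers.
    private void archive(byte[] block) {
        // intentionally empty in this sketch
    }
}
```

If blocks arrive roughly once a minute, a window of around 10,000 blocks covers about a week of history.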
James
P.S. Still waiting for feedback on blockchain FIFO; not sure why nobody does this.
Just checked back on the previous post, and this FIFO idea makes sense to me. The high-volume transaction systems I am talking about achieve this by DB partitioning - i.e. they keep the transactions they care about (the live ones) in one partition and shove all the others elsewhere - which helps keep processing speed high.
These systems can be viewed as centralised, because typically multiple processing nodes are localised to a single database (blockchain?), but in practice the transaction processors are all async and running in parallel, sharing the DB (blockchain) copy. The NXT equivalent is that all transaction processors (forgers) need a copy of the DB, i.e. the blockchain. It has been highlighted in a number of posts that this is not scalable and needs fixing, so FIFO or something like it to move blockchain history off the transaction processors (forgers) will help.
No need for any centralization with blockchain FIFO. It just might take a while to reparse so many gigabytes of blockchain from genesis, so weekly checkpoints would help immensely.
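Purely as a sketch of what a checkpoint check could look like (none of this is existing NXT code; the names are made up): publish a hash of the ledger state at a known height, and let a freshly syncing node verify a downloaded state snapshot against it instead of replaying every block from genesis.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

// Illustrative checkpoint check: instead of replaying every block from genesis,
// a freshly syncing node loads a state snapshot for a checkpoint height and
// verifies its hash against a well-known, widely published checkpoint value.
public class CheckpointVerifier {

    // Does the downloaded snapshot match the published checkpoint hash?
    public static boolean matchesCheckpoint(byte[] publishedStateHash, byte[] snapshotBytes) {
        return Arrays.equals(publishedStateHash, sha256(snapshotBytes));
    }

    private static byte[] sha256(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available on the JVM
        }
    }
}
```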
Each node should be able to process at full speed, even with the NXT VM (Turing-complete scripts), since we can set the NXT VM fees so that it uses less than 50% of the CPU while the node is waiting for bandwidth. My guess is that NXT core is currently using a small percentage of CPU, maybe even less than 5% on average?
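Rough sketch of the fee/CPU matching idea; the 50% target is from above, but the per-operation cost is an assumed number, not a measurement:

```java
// Illustrative only: back out a per-second operation budget for the NXT VM
// from a CPU target. Numbers here are assumptions, not NXT measurements.
public class VmCpuBudget {

    // How many VM operations per second fit inside the CPU budget?
    // cpuTarget is the fraction of one core the VM may use (e.g. 0.5),
    // nanosPerOp is the average cost of one VM operation in nanoseconds.
    public static long maxOpsPerSecond(double cpuTarget, double nanosPerOp) {
        return (long) (cpuTarget * 1_000_000_000L / nanosPerOp);
    }

    public static void main(String[] args) {
        // Example: 50% of one core at an assumed 100 ns per op -> 5,000,000 ops/sec.
        long budget = maxOpsPerSecond(0.5, 100.0);
        System.out.println("VM op budget per second: " + budget);
        // Fees per VM operation would then be priced so that filling more than
        // this budget per second becomes prohibitively expensive.
    }
}
```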
As soon as a node is caught up to the current block, all it needs is enough bandwidth to stay caught up. 100 kbps is my estimate for 1000 TPS peak and 250 TPS sustained capability.
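The arithmetic behind that estimate is just TPS times average serialized transaction size; the ~50-byte binary transaction below is an assumption on my part, not a measured NXT size:

```java
// Illustrative arithmetic only: bandwidth needed to stream transactions at a
// given rate. The 50-byte binary transaction size is an assumption for the
// example, not a measured NXT figure.
public class BandwidthEstimate {

    // Required link speed in kilobits per second for a given transaction rate
    // (tps) and average serialized transaction size in bytes.
    public static double requiredKbps(double tps, double bytesPerTx) {
        return tps * bytesPerTx * 8.0 / 1000.0;
    }

    public static void main(String[] args) {
        // 250 TPS sustained at ~50 bytes per binary transaction -> ~100 kbps.
        System.out.printf("250 TPS sustained needs ~%.0f kbps%n", requiredKbps(250, 50));
    }
}
```

Short bursts above the sustained rate would presumably be absorbed by buffering and the adaptive max TPS rather than by raw link speed.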
James