I don't need to know how Bitshares works in detail to apply general computer science theory to the claims made.
Well, seeing as Larimer himself said the solution is to "...keep everything in RAM...", how much RAM do you think is required to keep up with a sustained 100,000 tps, if that is indeed true?
Just for the record, cross post from
https://bitcointalk.org/index.php?topic=1196533.msg12575441#msg12575441

I want to address the MAJOR misconception and that is that we keep all transactions in RAM and that we need access to the full transaction history to process new transactions.
The default wallet has all transactions expiring just 15 seconds after they are signed which means that the network only has to consider 1,500,000 * 20 byte (trx id) => 3 MB of memory to protect against replay attacks of the same transaction.
The vast majority of all transactions simply modify EXISTING data structures (balances, orders, etc). The only type of transaction that increases memory use permanently are account creation, asset creation, witness/worker/committee member creation. These particular operations COST much more than operations that modify existing data. Their cost is derived from the need to keep them in memory for ever.
So the system needs the ability to STREAM 11 MB per second of data to disk and over the network (assuming all transactions were 120 bytes).
If there were 6 billion accounts and the average account had 1KB of data associated with it then the system would require 6000 GB or 6 TB of RAM... considering you can already buy motherboards supporting 2TB of ram and probably more if you look in the right places (http://www.eteknix.com/intels-new-serverboard-supports-dual-cpu-2tb-ram/) I don't think it is unreasonable to require 1 TB per BILLION accounts.

OK, that clears that up; maybe he should be a bit clearer in future about what exactly "...keep everything in RAM..." means.
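Just to sanity check those quoted figures, here is my own back-of-envelope in Python using the numbers from the post (100,000 tps, 15 second expiration, 20 byte trx ids, ~120 byte transactions, ~1KB per account). Note that by my arithmetic the replay window and stream rate come out closer to 30 MB and 12 MB/s than the quoted 3 MB and 11 MB/s, but the order of magnitude is the same:

# Back-of-envelope check of the figures in the quoted post. All inputs are
# taken from the quote itself, not measured from a running node.
TPS = 100_000
EXPIRATION_S = 15          # default wallet transaction expiration
TRX_ID_BYTES = 20
TRX_BYTES = 120            # assumed average transaction size
BYTES_PER_ACCOUNT = 1024   # assumed state kept per account

# Replay-protection window: ids of every unexpired transaction.
replay_window = TPS * EXPIRATION_S * TRX_ID_BYTES
print(f"replay window: {replay_window / 1e6:.0f} MB")       # ~30 MB

# Raw transaction stream that must be written to disk and relayed.
stream_rate = TPS * TRX_BYTES
print(f"stream rate:   {stream_rate / 1e6:.1f} MB/s")       # ~12.0 MB/s

# RAM per billion accounts at ~1 KB each.
ram_per_billion = 1_000_000_000 * BYTES_PER_ACCOUNT
print(f"RAM/1B accts:  {ram_per_billion / 1e12:.1f} TB")    # ~1.0 TB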
It still leaves a lot of questions unanswered regarding that claim though, specifically the IO-related ones.
Streaming 11MB/s from disk doesn't sound too hard, but it depends on a number of factors. Reading one large consecutive 11MB chunk per second is of course child's play, but if you are reading that 11MB in many small reads (or worse still, if it's a fragmented mechanical platter drive), then that simple task becomes not so simple.
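To put some rough numbers on that (assumed 4KB random reads and ballpark IOPS figures for a platter drive versus a SATA SSD, nothing measured):

# Rough illustration: the same 11 MB/s is trivial sequentially but painful
# as small random reads. The IOPS figures below are ballpark assumptions.
STREAM_MB_S = 11
READ_SIZE_KB = 4                       # assumed small read size

iops_needed = STREAM_MB_S * 1024 / READ_SIZE_KB
print(f"~{iops_needed:.0f} random {READ_SIZE_KB}KB reads per second needed")   # ~2816

HDD_IOPS = 150       # rough figure for a 7200rpm mechanical drive
SSD_IOPS = 50_000    # rough figure for a SATA SSD

print(f"HDD headroom: {HDD_IOPS / iops_needed:.2f}x")   # ~0.05x, nowhere near enough
print(f"SSD headroom: {SSD_IOPS / iops_needed:.1f}x")   # ~17.8x, fine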
Also, network IO seems to have some potential issues. 11MB/s downstream isn't too much of a problem, a 100Mbit downstream line will just about suffice, but what about upstream? I'm assuming (so correct me if I'm wrong) that these machines will have numerous connections to other machines and will have to relay that information to other nodes. Even if each node only has a few connections (10-20) but has to relay a large portion of those 100,000 tps to each of them, upstream bandwidth requirements for that node quickly approach multiple gigabits per second in the worst case.
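Here is what that fan-out looks like in the worst case, assuming ~11MB/s of transaction data relayed in full to each peer with no batching or deduplication savings:

# Worst-case upstream bandwidth if the full ~11 MB/s stream is relayed to
# every peer (assumed peer counts; a real relay protocol may dedupe/batch).
STREAM_MB_S = 11

for peers in (10, 20):
    upstream_mb_s = STREAM_MB_S * peers
    upstream_gbit_s = upstream_mb_s * 8 / 1000
    print(f"{peers} peers -> {upstream_mb_s} MB/s (~{upstream_gbit_s:.2f} Gbit/s)")
# 10 peers -> 110 MB/s (~0.88 Gbit/s)
# 20 peers -> 220 MB/s (~1.76 Gbit/s)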
Furthermore, let's assume that Bitshares is a huge success, is processing just 10,000 tps sustained, and that none of these issues exist other than storage. Bitshares relies on vertical scaling, and since we've already determined that 100,000 tps is roughly 1TB of data a day, 10,000 tps works out to about 100 GB daily. Operators of these machines are going to be spending a lot of money on fast drive space and will have to employ sophisticated storage solutions in order to keep pace. This becomes quite insane at the 100,000 tps level (~365TB per year); perhaps Bitshares has some chain pruning or other mechanisms to keep this down? (I hope so!)
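The raw storage arithmetic, again assuming ~120 bytes per transaction as in the quoted post and no pruning or indexes on top:

# Storage growth from the raw transaction stream alone, assuming ~120 byte
# transactions and no pruning.
TRX_BYTES = 120
SECONDS_PER_DAY = 86_400

for tps in (10_000, 100_000):
    per_day_gb = tps * TRX_BYTES * SECONDS_PER_DAY / 1e9
    per_year_tb = per_day_gb * 365 / 1000
    print(f"{tps:>7,} tps: ~{per_day_gb:.0f} GB/day, ~{per_year_tb:.0f} TB/year")
#  10,000 tps: ~104 GB/day, ~38 TB/year
# 100,000 tps: ~1037 GB/day, ~378 TB/year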
Finally, back to RAM requirements: what measures or mechanisms are in place to prevent someone from creating 1 billion or more empty accounts and causing RAM requirements to shoot upwards, given that this information is kept in RAM? A few machines could easily do this over the course of a couple of weeks if there are no other costs associated with it. I assume there is some filtering to only keep accounts with activity in RAM, as otherwise this will be a major issue.
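As a rough sketch of why this worries me (the ~1KB-per-account figure is from the quoted post; the target size, time window and fee levels are entirely hypothetical):

# How fast the account set could be inflated if creation is only limited by
# fees. Per-account RAM (~1 KB) is from the quoted post; the rest is
# hypothetical.
BYTES_PER_ACCOUNT = 1024
TARGET_ACCOUNTS = 1_000_000_000
DAYS = 14

creations_per_s = TARGET_ACCOUNTS / (DAYS * 86_400)
ram_tb = TARGET_ACCOUNTS * BYTES_PER_ACCOUNT / 1e12
print(f"~{creations_per_s:.0f} account creations/s sustained for {DAYS} days")  # ~827/s
print(f"adds ~{ram_tb:.1f} TB of RAM that every full node must hold")           # ~1.0 TB

# Whatever the creation fee is, the attack cost scales linearly with it:
for fee_usd in (0.01, 0.10, 1.00):   # hypothetical fee levels
    print(f"fee ${fee_usd:.2f}/account -> attack costs ${fee_usd * TARGET_ACCOUNTS:,.0f}")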
Either way, this is just another example of how vertically scaled systems are not viable. Should Bitshares grow to the level where it is processing 100,000s of transactions per second and has even a few hundred million accounts, you need a machine with 100s of GB of RAM, 100s of TB of storage, and an internet connection running at multiple gigabits per second... not really accessible to the man on the street.
Perhaps the cost of participating at that level just isn't an issue, as Bitshares has always had a semi-centralized element to it anyway, and most of its supporters don't seem to mind. For me though, relying on ever-increasing hardware performance and sacrificing the core principles which brought us all here in the first place is a cop-out.
2046483ms th_a application.cpp:516 get_item ] Couldn't find block 00008e220adc1561e0ceb4964000000000000000 -- corresponding ID in our chain is 00008e220adc1561e0ceb496e2fe61effc44196e
2046486ms th_a application.cpp:432 handle_transaction ] Got transaction from network
./run.sh: line 1: 8080 Segmentation fault ./witness_node --genesis-json "oct-02-testnet-genesis.json" -d oct02 -w \""1.6.0"\" -w \""1.6.1"\" -w \""1.6.2"\" -w \""1.6.3"\" -w \""1.6.4"\" -w \""1.6.5"\" -w \""1.6.6"\" -w \""1.6.7"\" -w \""1.6.8"\" -w \""1.6.9"\" -w \""1.6.10"\" -w \""1.6.11"\" --replay-blockchain
This is what the init node said before it died during the flood. We are looking into what could have caused it.
As far as release plans go, we will protect the network from excessive flooding by rate limiting transaction throughput in the network code. We recently made a change that allowed a peer to fetch more than one transaction at a time and that change is what allowed us to hit 1000+ TPS. That change had the side effect of making the network vulnerable to flooding. For the BTS2 network we will revert to only fetching 1 transaction at a time which will limit throughput of the network to under 100 TPS. This will only be a limit in the P2P code which can be upgraded at any time without requiring a hard fork.
If the network is generating anywhere near 100 TPS per second then the network will be earning more than $1M per day in fees and our market cap would be closer to Bitcoins market cap. In other words, this should have 0 impact on customer experience over the next several months. By the time we start gaining traction like that we will have worked through the kinks of getting a higher throughput network layer.
So, as evidenced here, while Bitshares did reach 1000 tps (peak!) during their internal tests, it caused stability issues that brought the test to a halt. To work around this, they have capped throughput at 100 tps because of network IO issues.
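Two quick checks on the numbers in that reply, using assumed round figures rather than anything from the codebase:

# 1) If fetching one transaction at a time really caps the network at under
#    100 TPS, that is consistent with roughly a 10 ms round trip per
#    serialized fetch (an assumed figure, not from the post).
rtt_s = 0.010
print(f"1 tx per {rtt_s * 1000:.0f} ms fetch -> ~{1 / rtt_s:.0f} tps ceiling")

# 2) The claim that ~100 TPS would mean >$1M/day in fees implies an average
#    fee of roughly:
tps, claimed_daily_revenue_usd = 100, 1_000_000
tx_per_day = tps * 86_400
implied_fee = claimed_daily_revenue_usd / tx_per_day
print(f"{tx_per_day:,} tx/day -> ~${implied_fee:.2f} average fee per transaction")
# 8,640,000 tx/day -> ~$0.12 average fee per transaction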