In a controlled environment it has already demonstrated 100k tps. What this demonstrates is that the bottleneck is network capability rather than the core code. Once network capabilities scale up naturally, 100k tps becomes possible as demand warrants.
You are obscuring the fact that what you just wrote effectively means, "this is how many TX/s a fast server CPU can process, and we eliminated any sense of a real network from the test".
Once network capabilities scale up naturally, 100k tps becomes possible as demand warrants.
This is like saying our coin will get faster once we redesign it (or once we monopolize the DPOS witnesses with our own corporate-funded super hardware, because DPOS is really just a way to obscure that we are paying dividends to ourselves, analogous to how masternodes in Dash were ostensibly a way of obscuring that those huge interest payments were going to those who owned the most coins). Let's rather talk about what a coin can do now, today, in the real world of decentralized witnesses of varying capabilities.
Obscuring instamines and other means (cheap pre-sales of ProtoShares that were converted to BitShares?) of retaining control over a large percentage of the coins, and then setting up a paradigm where coins can be parked to earn dividends. Hmmm. How dumb are we? Hey, more power to them if investors are gullible enough to buy it. But it all starts to fit together when you analyze why they would even think they could have uniform capability across all witnesses.
Your assumption about which witness is next to sign a block, which led to a dozen more assumptions, is incorrect, and thus your derived assumptions are also incorrect. You really have no claim about BitShares and its tps without fully understanding the concepts behind the tests and the feature itself.
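For context on the scheduling point being argued here, a minimal sketch of the shuffled round-robin producer order commonly described for DPOS (the seed derivation, witness names, and shuffle rule below are illustrative assumptions, not the actual BitShares implementation): each round, the elected witnesses are deterministically shuffled, then each produces exactly one block in turn.

```python
import hashlib
import random

def witness_schedule(witnesses, round_seed):
    """Deterministically shuffle the elected witnesses for one round.

    Every node derives the same order from the shared seed, so the next
    producer is known in advance within a round, but the order changes
    from round to round rather than being a fixed rotation.
    """
    rng = random.Random(hashlib.sha256(round_seed.encode()).digest())
    order = list(witnesses)
    rng.shuffle(order)
    return order

witnesses = ["w1", "w2", "w3", "w4", "w5"]  # hypothetical elected set
schedule = witness_schedule(witnesses, round_seed="round-42")
# every witness gets exactly one production slot per round
assert sorted(schedule) == sorted(witnesses)
```

The design point in dispute: because the order is reshuffled per round, reasoning that assumes a fixed, predictable rotation of signers does not carry over.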
If you would be kind enough, you are welcome to cite a reference document. I was working from the official description of DPOS on the official website. As I wrote, I will edit my post with corrections if anyone provides information. You have not yet provided any. So far I have read only your (perhaps incorrect) spin on the matter.
Again, the bottleneck is in the consensus code, which has been optimized so that more than 100k tps is possible; a Bitcoin controlled environment can't do this because of a bottleneck outside of network constraints. By leveraging LMAX technology and applying it to blockchains, they were able to increase efficiency in validating and signing blocks. Propagation is always an issue, which is where scaling up network parameters helps; that is totally feasible, and multiple markets are betting on it and will benefit. Because there is no mining, higher throughput is possible off the bat, and it is now optimized to create more tps. DPOS allows them to maximize decentralization while witnesses remain anonymous, and even so, with BitShares following regulatory rules there is less incentive for a regulation attack than against Bitcoin.
With fiber-optic internet, would Bitcoin be able to do 100k tps? No.
Increasing network parameters will only help Bitcoin with the regulation attack, not scale up tps as efficiently. Today BTC is restricted to ~7 tps at 1 MB, so it's orders of magnitude off, and I'd argue that DPOS is still more decentralized than using LN to increase tps and using Bitcoin as a settlement platform.
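The ~7 tps ceiling follows from simple arithmetic (the ~250-byte average transaction size is an assumed typical value, not a protocol constant):

```python
def max_tps(block_size_bytes, avg_tx_bytes, block_interval_s):
    """Upper bound on throughput from block size and block interval."""
    return block_size_bytes / avg_tx_bytes / block_interval_s

# 1 MB blocks every 600 s with ~250-byte transactions
print(round(max_tps(1_000_000, 250, 600), 1))  # 6.7 tps
```

Reaching 100k tps under these constraints would require roughly a 15,000x larger block size or shorter interval, which is the "orders of magnitude" gap referred to above.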