Board: Altcoin Discussion
Re: The 2.0 throwdown thread
by TPTB_need_war on 22/10/2015, 16:22:31 UTC
In a controlled environment it's already demonstrated 100k TPS. The bottleneck is the network capabilities rather than the core code, which is what it demonstrates. Once network capabilities scale up naturally, then 100k becomes possible as demand sees fit.

You are obscuring that what you just wrote effectively means, "this is how many TX/s a fast server CPU can process and we eliminated any sense of a real network from the test".
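
To put rough numbers on why eliminating the network matters, here is a back-of-envelope sketch (the ~250 bytes per transaction is my assumption, not a Bitshares figure):

Code:
# Back-of-envelope: network load implied by 100k TX/s.
# Assumes ~250 bytes per signed transaction (my guess), and that every
# witness must receive every transaction at least once (gossip
# duplication and TCP overhead only make this worse).
TX_PER_SEC = 100_000
BYTES_PER_TX = 250

bytes_per_sec = TX_PER_SEC * BYTES_PER_TX
mbits_per_sec = bytes_per_sec * 8 / 1e6
print("%.0f MB/s, i.e. ~%.0f Mbit/s per node, sustained"
      % (bytes_per_sec / 1e6, mbits_per_sec))

That is roughly 200 Mbit/s of sustained inbound traffic per witness before any overhead, which very few real-world nodes of "varying capabilities" can sustain.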

Once network capabilities scale up naturally, then 100k becomes possible as demand sees fit.

This is like saying it will get faster once we redesign our coin (or monopolize the DPOS witnesses with our own corporate-funded super hardware, because DPOS is really just a way to obscure that we are paying dividends to ourselves, analogous to how masternodes in Dash were ostensibly a way of obscuring that those huge interest fees were going to those who owned the most coins). Let's rather talk about what a coin can do now, today, in the real world of decentralized witnesses of varying capabilities.

Obscuring instamines and other means (cheap pre-sales of ProtoShares that were converted to Bitshares?) of gaining control over a large percentage of the coins, then setting up a paradigm where coins can be parked to earn dividends. Hmmm. How dumb are we? Hey, more power to them if investors are gullible enough to buy it. But it all starts to fit together when analyzing why they would even think they could have uniform capability across all witnesses.

Your assumption about which witness is next to sign a block, which led to a dozen more, is incorrect, thus your derived assumptions are also incorrect. Thus you really have no claim about Bitshares and the TPS without fully understanding the concepts behind the tests and the feature itself.

If you would be so kind, you are welcome to cite a reference document. I was working from the official description of DPOS at the official website. As I wrote, I will edit my post with corrections if anyone provides information. You have not yet provided any information. So far I have read only your (perhaps incorrect) spin on the matter.
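
For the record, here is the scheduling model I was working from, per the official description: the elected witnesses sign in a round-robin order that is shuffled each round. This is a minimal sketch of my understanding, not the actual Bitshares code; if the real scheduling differs, cite it and I will correct my post:

Code:
import random

# Minimal sketch of DPOS witness scheduling as I understand the
# official description (NOT the Bitshares source): elected witnesses
# are shuffled once per round, then each takes one block slot in turn.
def schedule_round(witnesses, round_seed):
    """Return the block-signing order for one round."""
    order = list(witnesses)
    random.Random(round_seed).shuffle(order)  # stand-in for the chain's shuffle
    return order

witnesses = ["witness_%d" % i for i in range(21)]
print(schedule_round(witnesses, round_seed=42)[:5])  # next signers are public

The property that matters for my argument below is that the upcoming signers are publicly known in advance of their slots.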

Why not read the code?

Again, the bottleneck is in the consensus code, which has been optimized so that it is possible to do more than 100k TPS; a controlled Bitcoin environment can't do this because of bottlenecks outside of network constraints. By leveraging LMAX technology and applying it to blockchains, they were able to increase efficiency in validating and signing blocks. Propagation is always an issue, which is where scaling up network parameters helps and is totally feasible, and which multiple markets are betting on and will benefit from. Because there is no mining it is possible off the bat, and it is now optimized to deliver more TPS. DPOS allows them to maximize decentralization while remaining anonymous, and even so, Bitshares following regulatory rules gives less incentive for a regulation attack than Bitcoin.

With fiber-optic internet, would Bitcoin be able to do 100k TPS? No.

LMAX does 100k at 1 ms latency: http://www.infoq.com/presentations/LMAX

On the use of LMAX in BTS: https://bitshares.org/technology/industrial-performance-and-scalability/

Increasing network params will only help Bitcoin with the regulation attack, not scale up TPS as efficiently. Today BTC is restricted to 7 TPS at 1 MB, so it's orders of magnitude off, and I'd argue that DPOS is still more decentralized than using LN to increase TPS and using Bitcoin as a settlement platform.
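
For reference, the 7 TX/s figure follows from simple arithmetic on the block size and interval (assuming ~250 bytes per transaction), and the same arithmetic shows what 100k TX/s would imply for any chain that must propagate the transactions:

Code:
BYTES_PER_TX = 250        # assumed average transaction size
BLOCK_BYTES = 1_000_000   # 1 MB blocks
BLOCK_INTERVAL = 600      # seconds between blocks

print("Bitcoin ceiling: ~%.1f TX/s"
      % (BLOCK_BYTES / BYTES_PER_TX / BLOCK_INTERVAL))   # ~6.7 TX/s

needed = 100_000 * BYTES_PER_TX * BLOCK_INTERVAL
print("100k TX/s at 10-minute blocks: %.0f GB per block"
      % (needed / 1e9))                                  # 15 GB per block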

As I wrote from the start of this, Bitshares 2.0 has optimized the witness code so the CPU can scale to 100,000 TX/s, but not only do they apparently require something on the order of LMAX's 1 ms network latency to achieve it, I also haven't read where they've modeled denial-of-service attacks on the transaction propagation network at such high TX/s. Real-time systems are not only about average throughput but also about guaranteed reliability and continuous throughput (akin to a committed information rate). If you are sending your real-time payment through and the next 10 witnesses queued in the chosen order are DoS attacked so that they are unable to receive transactions, then they can't complete their function. That is a fundamental problem that arises from using PoS as the mining method while claiming such high TX/s across the variable hardware and network capabilities of nodes (PoS coins claiming more conservative TX/s and block times are thus less likely to bump into these issues, which are external to the speed of updating the ledger in the client code). They can adopt countermeasures, but that is going to push the maximum TX/s rate downward, perhaps significantly.

I am not even confident they can maintain 100 TX/s under a DDoS attack on a real-world network today, composed of witnesses with a myriad of capabilities. Someone needs to do some modeling.
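
As a starting point, here is the kind of toy model I mean, with entirely made-up parameters (knockout probability, round size); it only illustrates the shape of the problem, not Bitshares' actual numbers:

Code:
import random

# Toy Monte Carlo of a targeted DoS against a public witness schedule.
# Every parameter is an illustrative assumption, not a measurement.
N_SLOTS = 101       # block slots per round
P_KNOCKOUT = 0.3    # chance a targeted witness misses its slot
K = 10              # consecutive missed slots = a stalled payment
ROUNDS = 10_000

rng = random.Random(1)
stalled = 0
for _ in range(ROUNDS):
    # The attacker reads the public schedule and floods each upcoming
    # signer just before its slot; a hit means the slot produces no block.
    run = 0
    for _slot in range(N_SLOTS):
        run = run + 1 if rng.random() < P_KNOCKOUT else 0
        if run >= K:
            stalled += 1
            break

print("rounds containing a %d-slot stall: %.2f%%"
      % (K, 100.0 * stalled / ROUNDS))
# Even when full K-slot stalls are rare, ~P_KNOCKOUT of all slots are
# lost, so sustained throughput falls well below the headline TX/s.

Swap in a real distribution of witness hardware and real attacker capacity and the picture could be considerably worse.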