Board: Announcements (Altcoins)
Re: HEAT Discussion and Technical info
by verymuchso on 15/12/2017, 22:34:51 UTC
Hi,

I'd like to address some questions about the specifics of the `2000 tps` test.

Let's start with the specs of the PC mentioned before, the one on which the test was performed.

This is my daily workhorse PC:

   OS: Ubuntu 16.x
   RAM: 16GB
   CPU: Intel Core i5
   HD: 500GB SSD

Now about the test setup.

The HEAT server under test actually runs inside my Eclipse Java IDE, as opposed to running the HEAT server from the command line (in its own JVM).
Running in the Eclipse IDE, and in our case in DEBUG mode, is quite a limiting factor in my experience.
Running HEAT from the command line does not carry the burden of also running the full Eclipse Java IDE, plus whatever overhead Eclipse adds for breakpoints and the ability to pause execution.

We have not yet tested this with HEAT running from the command line; I expect it to be considerably faster in that case.

Now about the WebSocket client app.

With the help of our newly beloved AVRO binary encoding stack we have been able to generate, sign, and store to a file our 500,000 transactions. This process takes a while, a few minutes at least. But I don't think this matters much, since in a real-life situation with possibly hundreds of thousands of users the cost of creating and signing the transactions is divided over all those users.
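
To give an idea of what that looks like, here is a minimal sketch of writing records with Avro's binary encoder. The schema, field names, and dummy signature below are made up for the example; they are not HEAT's actual wire format:

Code:

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    import org.apache.avro.Schema;
    import org.apache.avro.SchemaBuilder;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.EncoderFactory;

    public class TxGenerator {

        // Hypothetical transaction schema, for illustration only.
        static final Schema TX_SCHEMA = SchemaBuilder.record("Transaction")
            .fields()
            .requiredLong("sender")
            .requiredLong("recipient")
            .requiredLong("amount")
            .requiredBytes("signature")
            .endRecord();

        public static void main(String[] args) throws IOException {
            GenericDatumWriter<GenericRecord> writer = new GenericDatumWriter<>(TX_SCHEMA);
            try (FileOutputStream file = new FileOutputStream("transactions.bin")) {
                BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(file, null);
                for (long i = 0; i < 500_000; i++) {
                    GenericRecord tx = new GenericData.Record(TX_SCHEMA);
                    tx.put("sender", i);
                    tx.put("recipient", i + 1);
                    tx.put("amount", 100L);
                    // Dummy signature; the real generator signs each transaction.
                    tx.put("signature", ByteBuffer.wrap(new byte[64]));
                    writer.write(tx, encoder);
                }
                encoder.flush(); // the binary encoder buffers, so flush at the end
            }
        }
    }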

The client app was written in Java and opens a WebSocket connection to the HEAT server; since both are running on the same machine we use the localhost address.
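
A bare-bones version of such a client, written against the standard javax.websocket (JSR 356) API, could look like this. The port and endpoint path are placeholders, not HEAT's documented endpoint, and you need a JSR 356 implementation such as Tyrus on the classpath:

Code:

    import java.net.URI;
    import java.nio.ByteBuffer;

    import javax.websocket.ClientEndpoint;
    import javax.websocket.ContainerProvider;
    import javax.websocket.OnMessage;
    import javax.websocket.Session;
    import javax.websocket.WebSocketContainer;

    @ClientEndpoint
    public class TxFirehose {

        @OnMessage
        public void onMessage(String reply) {
            // Any server acknowledgements arrive here.
            System.out.println("server: " + reply);
        }

        public static void main(String[] args) throws Exception {
            WebSocketContainer container = ContainerProvider.getWebSocketContainer();
            // Placeholder port and path.
            Session session = container.connectToServer(TxFirehose.class,
                    URI.create("ws://localhost:7755/ws/"));

            byte[] tx = new byte[120]; // one pre-signed binary transaction
            session.getBasicRemote().sendBinary(ByteBuffer.wrap(tx));
            session.close();
        }
    }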

Now you might say; "Hey! wait a minute. Isn't localhost really fast? Isn't that cheating?"

The short answer: "No! And you may be missing what's important here."

While it's absolutely true that localhost is much faster than your average external network, what matters here is the level of bandwidth we are actually talking about. I suppose anyone reading this has downloaded a movie before, be that over torrent or streamed from the network. Well, there is your proof that the localhost objection carries no weight here: the bandwidth this test needs is tiny compared to what ordinary connections handle.

Your PC while downloading a movie, or the YouTube server streaming the most recent PewDiePie video to you and probably thousands of others, will process MUCH MUCH more data than our little test here.

One transaction in binary form is about 120 bytes in size; multiply that by 2,000 and you need a network that can carry 240 KB of data per second. I'm not sure what internet connections are normal in your country, but here in Holland we can get 400 Mbit connections straight to our homes, and that's standard consumer speed (I looked it up just now).

To put that in perspective, 240 KB per second is about 1/200th of the bandwidth you get with that 400 Mbit (50 MB/s) connection. You should understand by now that the network is not the bottleneck; it's the cryptocurrency server.
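
The arithmetic is easy to check for yourself; here it is as a few lines of Java, using the same assumptions as above:

Code:

    public class BandwidthCheck {
        public static void main(String[] args) {
            long txBytes = 120;                        // one binary transaction
            long tps = 2_000;                          // target throughput
            long requiredBytesPerSec = txBytes * tps;  // 240,000 B/s = 240 KB/s
            long linkBytesPerSec = 400_000_000L / 8;   // 400 Mbit/s = 50 MB/s

            System.out.printf("required: %d KB/s, link: %d MB/s, ratio: 1/%d%n",
                    requiredBytesPerSec / 1_000,
                    linkBytesPerSec / 1_000_000,
                    linkBytesPerSec / requiredBytesPerSec); // prints 1/208
        }
    }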



So what's so special about HEAT, you might ask; why can HEAT do what it does?

Well, for that you'd have to dive into the source code of our competitors, be it Waves, Lisk, NEM, IOTA, NXT, etc. Just this afternoon I took a look at the IOTA source code, which is always interesting (I did the same with the others mentioned).

But I can tell you right now that none of the other currencies (Waves, Lisk, NEM, IOTA, NXT, etc.) will be able to reach speeds similar to what HEAT has now shown it can.

Why I can claim this is pretty simple.

Cryptocurrencies, basically all of them (blockchain or tangle makes no difference here), follow a similar internal design. All of them need to store their balances, transactions, signatures, and so on, and they all use different databases to do so.

Some, like NXT, use the slowest of all solutions, a full-fledged SQL database; others have improved models optimized for higher speed in the form of key/value datastores. IOTA, I've learned today, uses RocksDB, Waves is on H2's key/value store, Bitcoin is on LevelDB, etc.

Afaik HEAT is the only one that does not use a database at all. Instead we've modeled our data in such a way that it can be written to a memory-mapped flat file, which is how we store blocks and transactions. Our balances are kept entirely in memory, and to support on-disk persistence we use https://github.com/OpenHFT/Chronicle-Map as our memory/on-disk hybrid indexing solution.
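
If memory-mapped files are new to you, the core of the technique fits in a few lines of Java. This is only a sketch of the idea, not HEAT's actual block file layout:

Code:

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class BlockLog {
        public static void main(String[] args) throws Exception {
            // Map a region of a flat file directly into memory; writes are plain
            // memory stores and the OS takes care of flushing pages to disk.
            try (RandomAccessFile raf = new RandomAccessFile("blocks.dat", "rw");
                 FileChannel channel = raf.getChannel()) {
                MappedByteBuffer buffer =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, 64L * 1024 * 1024);

                byte[] tx = new byte[120]; // one serialized transaction
                buffer.put(tx);            // appending is just a memory write
                buffer.force();            // optionally force dirty pages to disk
            }
        }
    }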

If you look at Chronicle Map's website you'll see they state: "Chronicle Map provides in-memory access speeds, and supports ultra-low garbage collection. Chronicle Map can support the most demanding of applications." Oh, and did I mention this grew from the needs of HFT trading systems to be faster than anything else available?
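
Using it is refreshingly simple, too. Here is a minimal example of a persisted off-heap map; the key/value types and sizing are illustrative, not our actual schema:

Code:

    import java.io.File;

    import net.openhft.chronicle.map.ChronicleMap;

    public class Balances {
        public static void main(String[] args) throws Exception {
            // An off-heap map persisted to a memory-mapped file.
            try (ChronicleMap<Long, Long> balances = ChronicleMap
                    .of(Long.class, Long.class)
                    .name("balances")
                    .entries(1_000_000)
                    .createPersistedTo(new File("balances.dat"))) {

                long accountId = 42L;
                balances.merge(accountId, 100L, Long::sum); // credit an account
                System.out.println(balances.get(accountId));
            }
        }
    }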



Anyways..

The next test is gonna be even cooler. We'll be hosting the single HEAT instance which will be forging blocks on a nice and powerful machine, much faster than my PC, probably something with 64GB RAM and perhaps 32 or 64 cores. My estimate is that we can push the TPS to much higher levels on such a server.

Right now we are adding binary AVRO encoding support to HEAT-SDK, and after that we'll release one of those samples like we used to do a month or so ago, with which you can fire transactions from your browser to our test setup. I bet that'll be fun.

- Dennis

Quote:
"I think you fail to see that testing all of this on a single server is absolutely useless. Yes, you can improve transaction throughput to 2000 tps by improving the software; yes, you can do it by improving the hardware. But all of this only affects a single node's speed, and it will not improve the consensus algorithm: you will not end up with a decentralized BLOCKCHAIN that can do 2000 tps, all you get is a centralized database that can do 2000 database operations per second. Basically what you guys are doing is useless, as you don't get a faster network and others don't even need your faster software. xD Soon Ethereum will release sharding and probably have more tps over a REAL NETWORK, and you guys will still be testing your 2000 tps on a single node."

The one who is failing to see things here is you, I'm afraid.

Before you can do any number of transactions over a peer-to-peer network, you first need to feed those transactions (over a network) to a single peer and process them there. The single peer already runs the "consensus algorithm" internally, also in our test; it works the same whether you generate a block yourself or receive one from the network, and receiving one from the network is actually cheaper to process. So that's the first part where you are wrong.

I fully admitted in my first post that work has to be done on the p2p code before peers can share blocks and transactions over the p2p network at those speeds.
But compared to what we achieved now, that second step is a rather simple problem to solve.

As for this combination: ETHEREUM + SHARDING + SOON.

That depends on your definition of SOON. Sharding is a really hard problem, and applying it to a live blockchain worth billions is basically the hard fork of all hard forks, rewriting the entire blockchain in the process. Also, they have not yet agreed on what sharding on Ethereum should even look like.

So it's safe to say you are wrong there too.