Board: Development & Technical Discussion
Re: Dynamic Scaling?
by Elliander on 16/12/2017, 22:12:49 UTC
I realize that a block can be empty, with no transactions, but don't node operators still have to download the full file size? I know a block always has these values:

Magic number (always 0xD9B4BEF9) - 4 bytes
Block size - 4 bytes
Block header - 80 bytes
Transaction counter - 9 bytes

Meaning that an empty block would require, at a minimum, 97 bytes. However, if the block size limit is set to 1 MB, allowing that space to be filled with transaction data, the question is whether nodes end up having to download a full 1 MB for an empty block, or only 97 bytes.

97 bytes.

There is no required block size.

There is a block size LIMIT, meaning that blocks are not allowed to be LARGER than the limit. Blocks smaller than the limit are perfectly valid and happen all the time.
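
Just to sanity-check that arithmetic, using the field sizes quoted above:

Code:
# Minimum overhead of an empty block, from the field sizes
# listed in the quote above.
EMPTY_BLOCK_BYTES = 4 + 4 + 80 + 9  # magic + size + header + tx counter
print(EMPTY_BLOCK_BYTES)  # 97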


That's good to hear.


Quote


Instead of long delays in making needed network changes, changes could happen the moment a majority of the node operators are ready to handle them.

And how would you count "a majority of the node operators"? There is no reliable way to know how many node operators there are, and it is cheap and easy to set up millions of programs all pretending to be nodes in order to "stuff the ballot box".

By locking out node operators who are not scaled up with the majority, and by providing a financial incentive to node operators for higher bandwidth usage, it would create an arms race similar to the one among ASIC miners, ensuring that the network can expand rapidly without the need for arguments over every little increase in block size.

The problem with autoscaling is that there isn't a reliable metric that can be used to determine when the size should scale up.

The difficulty can be scaled according to the time between blocks. This is a reliable metric. An attacker can't change the time between blocks without actually completing the proof-of-work (and if they are able to do that, then the difficulty NEEDS to increase).
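
For context, a minimal sketch of the retarget rule Bitcoin itself applies every 2016 blocks, assuming the standard protocol constants and omitting details such as the proof-of-work limit cap:

Code:
# Every 2016 blocks the target is rescaled by how long those blocks
# actually took versus the intended two weeks, clamped to a factor
# of 4 so difficulty can't swing wildly in a single step.
RETARGET_INTERVAL = 2016        # blocks
TARGET_SPACING = 600            # seconds (10 minutes per block)
EXPECTED = RETARGET_INTERVAL * TARGET_SPACING

def retarget(old_target, actual_timespan):
    clamped = max(EXPECTED // 4, min(actual_timespan, EXPECTED * 4))
    # Faster blocks -> smaller target -> higher difficulty.
    return old_target * clamped // EXPECTED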


Good question and good points. To give an example, the last time I wrote a sorting algorithm for a CS class, I measured its performance by incrementing an integer each time a check was performed. In this way I was able to get reliable information independent of the processing power of a given machine. All I focused on was the core essence of what the program was doing.
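
A minimal sketch of that kind of instrumentation, assuming a simple bubble sort with a hypothetical comparison counter (the count depends only on the input, not on the machine):

Code:
# Count comparisons in a bubble sort; the total is a function of the
# input alone, so it is independent of the hardware running it.
def bubble_sort(items):
    items = list(items)
    comparisons = 0
    for end in range(len(items) - 1, 0, -1):
        for i in range(end):
            comparisons += 1  # one check performed
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
    return items, comparisons

print(bubble_sort([5, 1, 4, 2, 3]))  # ([1, 2, 3, 4, 5], 10)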

So what does a node do? What is the core essence of its functionality that acts to limit the capabilities of the network?

Quote
"in order to validate and relay transactions, bitcoin requires more than a network of miners processing transactions, it must broadcast messages across a network using 'nodes'. This is the first step in the transaction process that results in a block confirmation." - https://www.coindesk.com/bitcoin-nodes-need/

So then, a reliable metric for node operation is to keep track of the information being broadcast to the network. Since it's the capabilities of the network that we care about, maybe there could be a simple integer appended to the information broadcast to the network, indicating whether the node is below or near its peak capacity. If every node attached this information to the messages it transmits, and it was then read as an aggregate from completed blocks, it would only add 2 bytes to the minimum size of a block, and the block as a whole could be read to determine the average. The information transmitted is itself an aggregate of various factors about the machine, giving an average that ends up being more of a vote.
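
To make that concrete, a sketch of what attaching such a signal could look like; the function names, the single-byte encoding, and the 0/1/2 levels are all hypothetical, not part of the actual Bitcoin protocol:

Code:
import struct

# Hypothetical capacity signal: 0 = well below capacity,
# 1 = near capacity, 2 = at capacity.
def attach_capacity_signal(payload, capacity_level):
    assert capacity_level in (0, 1, 2)
    return payload + struct.pack("B", capacity_level)

def read_capacity_signal(message):
    return message[:-1], message[-1]  # (original payload, level)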

So, as an example: suppose we have 10 nodes participating in a specific block. 6 of them report a 0, indicating that they are well below capacity. 3 report a 1 to indicate that they are near capacity. 1 reports a 2, indicating it is at its limit. That means we have 6 votes to increase the cap and 4 votes against, but only IF the current block size equals the current block size limit. If blocks are full, the next block gets a slightly higher ceiling. How much higher could be based on an additional digit within that integer, as a signal to the network of how much more each node can handle. The consequence is that the node which is at capacity won't be able to participate in as many transactions, so it will get fewer votes.
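
A toy tally of that example under the assumed rules (a 0 counts toward raising the cap, a 1 or 2 counts against, and the result only applies when the block was full):

Code:
def tally(votes, block_full):
    for_increase = sum(1 for v in votes if v == 0)
    against = len(votes) - for_increase
    if block_full and for_increase > against:
        return "raise ceiling for next block"
    return "keep current ceiling"

# 6 nodes report 0, 3 report 1, 1 reports 2 -> 6 for, 4 against.
print(tally([0] * 6 + [1] * 3 + [2], block_full=True))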

That is a simplified example, since a given node might have thousands of transactions that it has participated in, but that's OK. If it handles more transactions it gets more votes, meaning that smaller nodes that are not as capable might participate in fewer transactions, given their self-identified capability ratings. Giving node operators who have a wallet attached to the node a piece of the transaction fee for participating in this process would give them an incentive to scale up and handle more transactions.
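
The same tally with the transaction-count weighting described above; again purely hypothetical:

Code:
# Each node's vote is weighted by the number of transactions it
# handled, so more capable nodes carry proportionally more weight.
def weighted_tally(reports):  # reports: (capacity_level, tx_handled)
    for_increase = sum(n for level, n in reports if level == 0)
    against = sum(n for level, n in reports if level != 0)
    return for_increase, against

print(weighted_tally([(0, 500), (1, 200), (2, 50)]))  # (500, 250)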

Now, if the reverse happens, where a majority of node operators vote that they are at capacity, the network might cap there; or, if the votes lean towards the limit being too high, the network might scale back, even if the blocks aren't full. A formula could decide how much to scale back as well.
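
One possible formula, purely as an illustration: move the limit in proportion to how lopsided the vote was, capped to a small per-block step (the 5% cap is an arbitrary assumption):

Code:
MAX_STEP = 0.05  # assumed: at most a 5% change per block

def next_limit(current_limit, for_increase, against):
    total = for_increase + against
    if total == 0:
        return current_limit
    lean = (for_increase - against) / total  # -1.0 .. +1.0
    step = lean * MAX_STEP                   # bounded to +/- MAX_STEP
    return int(current_limit * (1 + step))

print(next_limit(1_000_000, 6, 4))  # 1010000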

I don't see any reason why we can't do this.