Re: Individual Block Difficulty Based on Block Size
Board: Development & Technical Discussion
by foolish_austrian on 16/02/2015, 04:34:56 UTC
An interesting concept. I need to spend some time digesting it, but it is an interesting direction to take. I think the biggest risk is the complexity factor. Bitcoin is at a conceptual level very simple, and yet the number of potential edge cases, attack vectors, and exploits is relatively large. Take a more complex system and all the risks become harder to model. Still, that alone isn't a reason to discount it, just a risk factor.

I agree that simplicity is always preferable. I mulled over concepts like adding additional PoW into the Merkle tree, or somewhere parallel to the header, but they all added complexity and in the end didn't add to the network's security. I really didn't like the idea of changing the header structure.

That said, I don't think this is all that complex, and the average user shouldn't need to care. All they need to know is that miners need to make a profit, and the higher the fee they pay, the sooner a miner can include their transaction in a block. If they pay too little, a miner will need to wait until the transaction is profitable to include, which may never happen if the fee is far below the market price.
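Just to make that concrete, here's a toy sketch of the inclusion decision a miner would face. Every constant and the exponent below are placeholders I made up for illustration, not numbers from the proposal:

```python
# Toy sketch (all numbers and the exponent are made-up placeholders).
AVG_SIZE = 500_000   # bytes: recent average block size (assumed)
HASH_COST = 25.0     # BTC: miner's expected cost to mine at difficulty 1 (assumed)

def difficulty_multiplier(size_bytes: int) -> float:
    """Hypothetical sub-linear relation f(s) = s**0.75, with f(1) = 1."""
    return (size_bytes / AVG_SIZE) ** 0.75

def marginal_cost(current_size: int, tx_size: int) -> float:
    """Extra expected hashing cost from growing the block by one transaction."""
    delta = difficulty_multiplier(current_size + tx_size) - difficulty_multiplier(current_size)
    return delta * HASH_COST

# A rational miner includes the transaction only once fee >= marginal cost.
fee = 0.0002  # BTC offered by the transaction
print(marginal_cost(400_000, 500))        # cost of adding a 500-byte tx now
print(fee >= marginal_cost(400_000, 500))  # False: this fee has to wait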

In the past when I'd worked on this, I'd struggled to produce a difficulty-size relation which was well justified; there seems to be an infinite class of functions that work just about as well. "Why this shape and not this other one?"

I think you're right that there are an infinite number of polynomials that work. This is where it seems we cannot divorce the politics from the algorithm: each polynomial is going to produce a slightly different outcome. I simply chose one as a starting point to illustrate the concept of difficulty vs. size (which I'm glad to find you've mulled over too!). I think the key is that anything sub-linear will eventually cause most transactions to become profitable, but with a varying delay. For example, we could make a polynomial that is nearly linear, which would make the time penalty for lower-fee transactions even longer. Linear would be the opposite extreme to what Satoshi chose.

I'm not sure we can say that one is correct, but I would point out that Satoshi did choose one relation, namely a constant, and it sits at the other extreme. We have no reason to claim this one is any more correct, and we do know it has the potential to produce certain outcomes we're not too excited about... in other words, we cannot be neutral and say we just won't pick a polynomial, because one has already been picked.

http://i.imgur.com/n5pPYES.png
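To put a few numbers on that family (my own toy illustration, using f(s) = s^k with s = block size / average size), here is how the penalty steepens as k moves from Satoshi's constant (k = 0) toward the linear extreme (k = 1). Note that every member satisfies f(1) = 1:

```python
# Illustration only: a few members of the family f(s) = s**k, where
# s = block size / recent average size. k = 0 is Satoshi's constant
# (size-independent) choice; k = 1 is the linear extreme; values in
# between are the sub-linear region discussed above.
for k in (0.0, 0.25, 0.5, 0.75, 1.0):
    penalties = {s: round(s ** k, 3) for s in (0.5, 1.0, 2.0, 4.0)}
    print(f"k = {k}: {penalties}")  # f(1) = 1 for every k
```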



but I was unable to prove that, even with homogeneous-cost rational miners, if there is a single supply level at which miner income is maximized then the system has an equilibrium there. I think it would be really good to be able to show that.

I actually started down this path, then realized that with anything other than a constant (Satoshi's parameter), the population and distribution of the unconfirmed transaction pool are always going to be changing, and therefore the profitability of mining will constantly be changing. As a result, I think you're going to see a whole array of mining strategies.

One of my concerns with this is whether these complex strategies would keep the expectation value of the block time at 10 minutes, or whether it would oscillate. It might produce undesirable oscillations if the natural frequency is too close to the 2016-block retarget period, but perhaps this could be mitigated by calculating the average block size over a longer time span.


because the function you've shown there doesn't obey the invariant that the result should be 1 when s = d = 1.


My bad, I copied the equation wrong from my code; I think the figures were correct. Are the labels confusing? Here is the same plot with numbers on the axes instead (I was trying to make the labels more descriptive by adding the "x average").

http://i.imgur.com/UE08LJy.png


As far as implementation goes, ... using floating point in consensus would be an unmitigated disaster. But that's no problem; a reasonable, cheap, fixed-point approximation is easy, especially for the limited range this function needs to operate over. A rational polynomial fit will work fine.

Yep... I've revealed that I'm at best an informed hobbyist. I don't know the codebase, but I did my best to read up on the block structure and such so I could suggest minimal changes. I have done a tiny bit of embedded design, so I know how painful floating point can be.
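For anyone curious what that fixed-point suggestion might look like, here's a minimal sketch. It's my own toy with made-up coefficients and scale factor; a real version would use coefficients from an offline polynomial or rational fit, as suggested above:

```python
# Minimal fixed-point sketch (my own toy, not actual Bitcoin code):
# evaluate an approximating polynomial for the difficulty multiplier
# using only integer arithmetic.

SCALE = 1 << 32  # fixed-point scale: a value x is stored as round(x * SCALE)

# Made-up coefficients for f(s) ~ c0 + c1*s + c2*s^2 over the range of
# interest, already converted to fixed point.
C0 = int(0.1 * SCALE)
C1 = int(0.7 * SCALE)
C2 = int(0.2 * SCALE)

def multiplier_fixed(s_fixed: int) -> int:
    """Evaluate c0 + c1*s + c2*s^2 in fixed point, Horner style.

    s_fixed is s * SCALE; the return value is f(s) * SCALE. Every node
    computes the identical integer result, which is the whole point of
    avoiding floating point in consensus code.
    """
    acc = C2
    acc = (acc * s_fixed) // SCALE + C1
    acc = (acc * s_fixed) // SCALE + C0
    return acc

# Check the invariant discussed earlier: f(1) should be 1
# (here it is 1 up to fixed-point rounding, since 0.1 + 0.7 + 0.2 = 1).
print(multiplier_fixed(SCALE) / SCALE)
```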