Showing 7 of 7 results by Mengerian
Post: Re: Gold collapsing. Bitcoin UP. (Board: Speculation)
by Mengerian on 18/08/2015, 16:27:15 UTC
To further elaborate my thoughts:

A full node has two main functions: monitoring and validation.

Monitoring the state of the network should be the first function of a node. In order to do this, a node should be able to keep track of transactions and blocks that may follow differing consensus rules, and detect competing blockchain forks. In the case of block size, the software should keep track of any block regardless of size, subject to its technical ability.

Current software, both XT and Core, does not provide very good monitoring, since each only keeps track of the longest blockchain that follows the one set of consensus rules it implements.

Validating transactions comes after monitoring, when the node operator can choose which transactions to accept, which to relay, and which proof-of-work blockchain to consider valid. Staying in sync with network consensus will be of primary importance here.
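
To make the monitoring-first idea a bit more concrete, here is a rough sketch in Python. It is only an illustration under assumed data structures (ChainTip, ChainMonitor, the 1 MB threshold, and the rule-flag names are all hypothetical, not how XT or Core actually work): the node records every proof-of-work branch it sees, regardless of block size, and notes which consensus rules each branch appears to follow. Deciding what to treat as valid is left as a separate, later step.

Code:
# Hypothetical monitoring-first node sketch; all names and thresholds are made up.
from dataclasses import dataclass, field

@dataclass
class ChainTip:
    tip_hash: str
    total_work: int
    max_block_size: int = 0                    # largest block seen on this branch
    rule_flags: set = field(default_factory=set)

class ChainMonitor:
    def __init__(self):
        self.tips = {}                         # tip_hash -> ChainTip

    def record_block(self, prev_tip: str, block_hash: str, work: int, size: int):
        """Record any block for monitoring purposes, even if it exceeds a
        local size preference; validity is a separate, later decision."""
        parent = self.tips.pop(prev_tip, ChainTip(prev_tip, 0))
        tip = ChainTip(block_hash,
                       parent.total_work + work,
                       max(parent.max_block_size, size),
                       set(parent.rule_flags))
        if size > 1_000_000:
            tip.rule_flags.add("big-blocks")   # this branch is not following the 1 MB rule
        self.tips[block_hash] = tip

    def competing_forks(self):
        """Report all known tips so the operator can see any divergence."""
        return sorted(self.tips.values(), key=lambda t: -t.total_work)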
Post: Re: Gold collapsing. Bitcoin UP. (Board: Speculation)
by Mengerian on 18/08/2015, 15:28:31 UTC
Peter R / awemany,

The user-configurable software blocksize limit proposal got me thinking a bit about what features would be best for a full node implementation.

I like to think about individual incentives, what individual actors in a system may be motivated to do, and then extrapolate to see how those choices would affect the whole system. In fact, the realization that Bitcoin could work in an environment where all the participants follow their individual incentives was what interested me in it in the first place.

So what do I, individually, want in a full node implementation? I am not a miner. I run a full node (on my crappy 6-year-old laptop) because I want to be able to monitor transactions on the Bitcoin network and make sure they are valid. So to do this well, the software should:
1) Keep track of the longest proof-of-work blockchain
2) Monitor other branching proof-of-work chains and let me know about them
3) Detect what consensus rules are being followed in any branching proof-of-work chains.

Software that does this would allow an individual to make informed decisions about what transactions they are willing to accept, and help them mitigate the risk of falling out of consensus. It could also possibly enable more sophisticated actions such as speculating between competing blockchain forks.

So if everyone else also followed this behaviour, what would happen? It seems to me that the overall system would end up being more robust, with less risk of accidental blockchain forks, and any forks that do occur resolving quickly to global consensus again.
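
As a rough sketch of how an individual operator might act on that monitoring (points 1-3 above): given a set of observed chain tips, each tagged with the consensus rules its branch follows, pick the most-work tip whose rules you are willing to accept. The data structure and numbers below are made up purely for illustration.

Code:
# Hedged illustration: choose which monitored branch to treat as valid.
# Each tip is (total_work, set_of_rule_flags); the data is hypothetical.
def choose_valid_tip(tips, accepted_rules):
    acceptable = [t for t in tips if t[1] <= accepted_rules]  # every flag must be accepted
    return max(acceptable, key=lambda t: t[0], default=None)

tips = [(105, {"big-blocks"}), (100, set())]
print(choose_valid_tip(tips, accepted_rules={"big-blocks"}))  # most-work big-block branch
print(choose_valid_tip(tips, accepted_rules=set()))           # falls back to the 1 MB branch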
Post: Re: Gold collapsing. Bitcoin UP. (Board: Speculation)
by Mengerian on 14/08/2015, 05:08:54 UTC
I'm actually quite excited about this idea.  It has a sort of inevitable feel to it.

Yes. Since anyone can run any software they want to interact with the Bitcoin network, this idea does seem like a logical development.

It also seems like one of those counter-intuitive anti-fragility things, where the seeming chaos and instability at a micro level will actually lead to a more predictable and stable behaviour at the macro level.

If it became more common for individual nodes to be able to tweak consensus parameters, then I think that would actually lead to more predictable and stable consensus behaviour in the long run. The worst thing that can happen to a node operator is to fall out of consensus with the rest of the network, so individual node operators would be strongly incentivised to develop methods to ensure they can track the status of the network, and deal with any potential consensus forks.

As it stands now, consensus behaviour is based on the specific implementation details of Bitcoin Core. The software is not designed with the assumption that hard consensus forks are a likely event, and when they do happen nodes are not designed to handle them gracefully. The accidental hard fork of March 2013 happened because of an obscure implementation detail in the Core software, and was only possible because the software monoculture created a "single point of failure". A more diverse set of consensus-rule implementations might result in more frequent consensus divergences and orphaned blocks, but each one would be non-catastrophic, and would lead toward a more stable and resilient network in the long run.
Post: Re: Gold collapsing. Bitcoin UP. (Board: Speculation)
by Mengerian on 11/08/2015, 23:55:22 UTC
Quote
Anyways, to really understand what happens when R->0 I think we need to make a new model that takes into account what we just learned from your chart above (that miners won't necessarily be hashing all the time).

That seems to be an interesting point, illustrating how the interests of users and miners could diverge.
For users, an empty block is always better than no block because it adds work to the chain and increases the security of the previous transactions.

Miners being financially motivated to shut down for a period of time may not be an issue, though.

Let's say that coinbase rewards are zero and miners live on fees only, and that at a given difficulty level it does not make sense to spend electricity until some threshold X of fee-paying transactions has been published.

In such a situation the difficulty will adjust downward until 10-minute blocks are restored. This might mean that after a block is found miners turn off for 5 minutes and only turn on after 5 minutes' worth of fees have accumulated, but the difficulty will have adjusted so that miners are likely to find the next block 5 minutes after turning on. Yes, this would also mean that if all miners kept running we would have 5-minute blocks, but they wouldn't keep running. And if they did, the difficulty would adjust back up.

Since these issues develop slowly, I believe we would see that difficulty will continue to adjust to maintain 10 min blocks regardless of the financial incentives of the time.

Yeah, I was thinking about this too. Interesting to imagine a global network of bitcoin miners switching their hashing farms off and on as transactions build up.

Although we could expect the 10 minute block period to stay the same, it seems that the variance in the time between blocks should go down.

As you say, for the first few minutes after a block is published, very few miners would hash until a certain threshold of transaction fees could be reaped, so very few blocks would be found soon after the previous one. At the other end of the curve, the longer a block goes unfound past the 10-minute mark, the more we could expect miners to throw every last hash at it, willing to burn lots of power for the chance to collect the richer reward from a block with more transactions than average. So the chances that a block takes much longer than 10 minutes would also decrease, as miners frantically burn energy in the hopes of earning richer blocks full of fees.
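
A toy simulation (all numbers hypothetical) of the simpler on/off version of this story: miners idle for a fixed fee-accumulation period, then hash with difficulty adjusted so that the remaining time to a block averages 5 minutes. The mean interval stays near 10 minutes, but the spread shrinks compared to plain exponential 10-minute blocks; the ramp-up effect described above would shrink the long tail even further.

Code:
import random, statistics

random.seed(1)
N = 100_000
idle, mean_hash = 5.0, 5.0   # minutes idle while fees build up, then exponential hashing time

switched = [idle + random.expovariate(1 / mean_hash) for _ in range(N)]   # on/off miners
always_on = [random.expovariate(1 / 10.0) for _ in range(N)]              # constant hashing baseline

print(statistics.mean(switched), statistics.stdev(switched))    # ~10.0 mean, ~5.0 spread
print(statistics.mean(always_on), statistics.stdev(always_on))  # ~10.0 mean, ~10.0 spread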
Post: Re: Gold collapsing. Bitcoin UP. (Board: Speculation)
by Mengerian on 11/08/2015, 04:56:28 UTC
Peter_R:

Great paper! The last paragraph got me thinking though, and I think I have come up with another way of visualizing the system that could generalize the result to the case where block reward is zero.

Mempool Demand Curve:

I think it simplifies things if block reward and transaction fees are treated as the same thing, where the block reward is simply treated as a transaction with a large fee (i.e., the coinbase transaction would be a very tall skinny triangle). The block reward can then be included in the mempool demand curve, causing it to pretty much start at R instead of 0.

Block Space Supply Curve:

Again, when considering the revenue per block, I will combine the reward and fees (R and M) and call it M_rev. The profit equation then becomes:

Profit = M_rev (h/H) e^(-τ/T) - ηhT
(Sorry for the rudimentary looking equation, I'm not sure how to enter it properly here)

In your paper, you base the supply curve on the "neutral profit". The problem with this is that the analysis breaks down when the block reward is zero. Instead of assuming a profitable empty block, I will simply solve for the total revenue (reward plus fees) needed to break even. So, to get the block space supply curve, simply set the profit to 0 and solve for M_rev, yielding:

M_rev = ηHT e^(τ/T)

Similar to your formulation, this curve bends upward as block size increases, but intersects the y-axis at ηHT. So for a miner to profit by mining an empty block, the block reward must be greater than ηHT.
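
For reference, here is the solve step written out explicitly, using the same symbols as above (in LaTeX, since the forum notation is hard to read):

Code:
\begin{align}
0 &= M_{\mathrm{rev}} \frac{h}{H} e^{-\tau/T} - \eta h T \\
M_{\mathrm{rev}} \frac{h}{H} e^{-\tau/T} &= \eta h T \\
M_{\mathrm{rev}} &= \eta H T \, e^{\tau/T}
\end{align}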

Plotting the Supply and demand curve together looks like this:
(I have normalized both curves to a single miner’s point of view by multiplying by h/H)

http://i.imgur.com/mxgW7UE.jpg

And the miner’s profit is the distance of the revenue curve above the cost curve.

The nice thing about plotting it this way is that we can also consider what would happen if transaction fees become a significant source of revenue and the block reward is not sufficient to profitably mine empty blocks:

http://i.imgur.com/Cd1Anik.jpg

We can see that in this case, it would only make sense for the miners to include enough transactions to be in the region of the graph where revenue exceeds cost. This would also work in the extreme case where R=0.
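
As a numeric illustration of that last point, here is a small sketch with R = 0. The supply curve is the one derived above, while the mempool demand curve and every constant are made-up placeholders (not values from the paper). It scans propagation times τ and reports the range where demand exceeds supply, i.e. the block sizes a fee-only miner could profitably target.

Code:
import math

# Hypothetical parameters, for illustration only
eta, H, T = 1.0, 100.0, 600.0           # cost per hash, network hashrate, target block time (s)

def supply(tau):                         # block space supply curve: M_rev = η H T e^(τ/T)
    return eta * H * T * math.exp(tau / T)

def demand(tau):                         # made-up mempool demand: fees saturate as the block grows
    return 2_000_000.0 * (1 - math.exp(-tau / 300.0))

profitable = [tau for tau in range(0, 3000, 10) if demand(tau) > supply(tau)]
print(profitable[0], profitable[-1])     # roughly 10 .. 2100 with these made-up numbers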
Post: Re: Gold collapsing. Bitcoin UP. (Board: Speculation)
by Mengerian on 30/06/2015, 20:22:12 UTC
remember that, if i'm right, and full blocks are indicating additional incremental demand due to currency crises, this is going to be a more regular thing.  this is what we've theorized about for years.  the network HAS to be ready if you want to Moon.

1MB isn't going to cut it.  right now, unconf tx's are now up to 10000 with now >5TPS delays.

To be clear, I agree. Smoothly adjusting fees are another tool in the scaling toolbox, though, and should be ready for use as well. Where we're going, an 8x or 20x increase all by itself might not cut it. I'm eyeing a 500x price increase for the next bubble (peak, crashing afterward to 5-10x lower: ~$10-20K/BTC), which will mean a pretty big increase in transaction volume and may overrun even the 20MB blocks if we don't have a fee market up by then.

We need as many scaling solutions in place as possible to enable the next bull run.

The nice thing about this debate is it's incentivizing them all. I'm confident we'll have the blocksize cap increase, hopefully in advance and not as a reaction to a stalled rally, but also a lot of other things. It's worth pushing against the small-blockers not because they have any real power to stop the increase, but because it lights a fire under them to do their part in making those optimizations to try to prove we can get by with smaller blocks. It's a futile effort on their part, but it ends up helping with large blocks, too.

Yes, it seems that increasing the blocksize limit is almost inevitable. No one can "centrally plan" bitcoin; at the end of the day anyone can run any software they want to interact with the network. It would be trivial for someone to branch off the source code using git and release a version with Gavin's patch applied, or BIP 100, or some other change. Using git, this version could easily track all the other development work, all changes would be well defined, and the change history would be cryptographically verified.

Since no one can apply top-down control over the network, we have to look at the incentives of the various participants. If miners, node operators, merchants, etc want larger blocks, then they would have an incentive to run software that allows a transition to larger blocks, and indicate that they are doing so.

The overriding incentive is to stay in line with the network consensus. But if miners start mining blocks indicating that they implement BIP 100 or BIP 101, then network participants can indicate that they will accept larger blocks contingent on most others also accepting them.

In the longer term, it would also be good if this debate fosters a more diverse software ecosystem, which would allow other protocol changes to be decided by the market in this manner.
Post: Re: Gold collapsing. Bitcoin UP. (Board: Speculation)
by Mengerian on 25/06/2015, 17:24:42 UTC

I've come to see a single core as being the single greatest threat to Bitcoin out of all of this. A better situation to me would be a bitcoin P2P network with several separately developed cores that adhere to a common set of rules. The upgrade path is then always put to a vote. Any one core could then propose a change simply by implementing it (with a rule that after x% of blocks back the change it becomes active).

Then if people like the change, more will move to that core. This in turn would cause the other core to adopt the change or lose their users, and that is how consensus is achieved. If a majority did not like the change, they would not move to that core, and the change would never be accepted.

At no point in this do any set of gatekeepers get to dictate terms. Since no core has a majority of users captured, change would always have to come through user acceptance and adoption, and developers would simply be proposers of options.

Yes, in the long run a network composed of many parallel implementations would be the most robust and anti-fragile.

It strikes me that contentious consensus decisions such as this could serve as an impetus for these parallel implementations. If miners and node operators "vote" on various hard- and soft-fork decisions by choosing to run different forks of the main client, then a more diverse software ecosystem could emerge. This should also be a less contentious way of coming to consensus, with everyone simply choosing the client that will implement the policy they favor once a supermajority of the network agrees.

This would probably also require something better than "version number" for clients to communicate the consensus rules they are willing to follow. If several consensus changes are being considered simultaneously, then some sort of more fine-grained indication would be needed.
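
One hypothetical way to provide that finer-grained indication (purely illustrative, not an existing proposal): replace the single version number with a small bit field, where each bit marks a specific consensus change the node or miner is prepared to follow.

Code:
# Hypothetical bit assignments, for illustration only
PROPOSAL_BITS = {
    "BIP100": 1 << 0,
    "BIP101": 1 << 1,
    "some-other-rule-change": 1 << 2,
}

def encode_support(proposals):
    """Pack the set of supported proposals into one integer field."""
    bits = 0
    for name in proposals:
        bits |= PROPOSAL_BITS[name]
    return bits

def decode_support(bits):
    """Recover which proposals a peer's field signals support for."""
    return {name for name, bit in PROPOSAL_BITS.items() if bits & bit}

field = encode_support({"BIP100", "BIP101"})
print(bin(field), decode_support(field))   # 0b11 {'BIP100', 'BIP101'}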