Re: Proof of Activity Proposal
Board: Development & Technical Discussion
by iddo on 20/06/2014, 21:00:21 UTC
I suppose it will just readjust based on the implicit (explicit?) fraction of BTC stakes, as well as the PoW being done?

Yes, by readjusting the PoW difficulty according to how many blocks were created in a retarget window, we can achieve a predictable gap between blocks; the new difficulty is derived from both the stakeholders' participation level and the PoW participation level.
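For concreteness, here is a minimal sketch (in Python, not taken from the paper) of what such a retarget could look like; the window size, target spacing, clamp factor and function name are all illustrative assumptions. The key point is that the elapsed time over a retarget window already reflects both the hash power spent and the fraction of stakeholders who were online to complete blocks, so a single timespan ratio captures the combined participation level.

```python
# Illustrative retarget sketch (assumed constants, not consensus parameters).
RETARGET_WINDOW = 2016   # blocks per retarget window
TARGET_SPACING = 600     # desired seconds between completed blocks
MAX_ADJUST = 4           # clamp factor, similar in spirit to Bitcoin's rule

def retarget(old_difficulty, first_timestamp, last_timestamp):
    """Return the PoW difficulty for the next window.

    In PoA a block is only completed when enough stakeholders are online to
    sign it, so the measured timespan reflects both PoW and stake
    participation; retargeting on it keeps the block gap predictable.
    """
    actual_timespan = last_timestamp - first_timestamp
    target_timespan = RETARGET_WINDOW * TARGET_SPACING

    # If blocks came too fast, ratio > 1 and difficulty rises; too slow,
    # ratio < 1 and difficulty falls. Clamp to limit oscillation.
    ratio = max(1.0 / MAX_ADJUST,
                min(MAX_ADJUST, target_timespan / actual_timespan))
    return old_difficulty * ratio
```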

I'm always annoyed that in one breath people are concerned that miners will mine empty blocks, yet apparently we shouldn't worry that PoA stakers will do the same.

Stakeholders shouldn't want to mine empty blocks as a malicious attack to destroy the system, because this would diminish the value of their own stake. The bigger concern is closely related to centralization/monopoly risks, i.e. a miners' cartel that obtains dominance and can then impose its policies (excluding transactions that don't conform to the fees it demands, etc.). If PoW mining hardware were completely unusable for anything besides this particular cryptocurrency, then stake and PoW hardware would indeed be very similar in this regard. But ASICs/GPUs can be repurposed for other uses, in particular to mine other cryptocurrencies, even as part of an auto-switching centralized pool.

If we are capping the value of blocks, will we have a rule that allows a single-transaction block that goes over that limit? Otherwise you wouldn't be able to spend BTC from extremely large-valued addresses.

Since the objective is to discourage stakeholders (or miners) from accepting low-fee transactions, I don't see any problem with a value-cap rule that lets the last transaction overflow. It doesn't have to be a block that includes only a single transaction: for example, if the limit is 100 BTC and there are transactions of 70 BTC and 60 BTC (with high proportional fees), then you can include the 70 BTC transaction, then also include the 60 BTC transaction, but then you must finalize the block because 70+60=130 BTC is over the limit.
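A small sketch of that rule, using the numbers from the example above (the 100 BTC cap and the helper name are illustrative, not consensus parameters):

```python
# Fill a block under a value cap, letting only the transaction that crosses
# the cap overflow it; everything after that is excluded.
VALUE_CAP = 100  # BTC, the example limit from the text

def fill_block(candidate_txs):
    """candidate_txs: list of (value_btc, fee_btc) in the miner's preferred order.
    Returns the transactions included in the block."""
    included, total_value = [], 0
    for value, fee in candidate_txs:
        if total_value >= VALUE_CAP:
            break                  # cap already reached or exceeded: finalize
        included.append((value, fee))
        total_value += value       # the tx that crosses the cap still gets in
    return included

# 70 BTC is included (total 70 < 100), the 60 BTC tx is also included and
# overflows the cap (total 130), and nothing after that is accepted.
print(fill_block([(70, 0.7), (60, 0.6), (10, 0.1)]))
# -> [(70, 0.7), (60, 0.6)]
```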

Would the value cap mean miners will just include tons of dust transactions? Or would the value cap be *in addition* to the 1MB data cap?

It's "in addition", see the paragraph that starts with "There is also a third tragedy of the commons problem" in section 2.1 in the paper. The point is that if there's only data size cap and let's say Alice wishes to send a low-value transaction and Bob wishes to send a high-value transaction, then Alice and Bob will compete for space and whoever of the two who offers the higher fee gets included. With the value cap, users who transact with higher values will pay higher fees (same proportional amount but higher in absolute terms), which is more fair. The data size cap is more controversial because it should accommodate the economy and reflect the wishes of the users. This is easier to see when you consider an extreme example, e.g. if Bitcoin allowed only 5 transactions per block now, then obviously everyone will revolt, hence similarly if the transactions volume increases substantially in the future (due to popularity) then the users will revolt if the 1MB cap remains.


Or are we hoping the size propagation penalty will naturally force this down? Would this also encourage people to split up their BTC into smaller values, increasing the UTXO set? Just random things to think about.

I failed to understand: what is a size propagation penalty? And why split up into smaller values?