Re: Getting rid of pools: Proof of Collaborative Work
by aliashraf on 12/06/2018, 17:45:51 UTC
Board: Development & Technical Discussion
The Shared Coinbase Transaction is not part of the header; only its hash (its id) is.

Quote
All the small proof-of-work solutions have to be communicated and calculated before the winning block can be communicated. So that is up to 10,000 (if the difficulty target is 0.0001) multiplied by the 64B size of a SHA256 hash, which is 640KB of data that must be communicated across the network. That's not factoring in the case where the network is subdivided and miners are mining on two or more leader Prepared blocks, in which case the network load can be double that or more.
You are mixing up heterogeneous things, imo:
As I have said before, the Shared Coinbase Transaction is just a transaction, with a size as small as 60 bytes (likely; implementation dependent) up to a maximum of 60,000 bytes, with a normal distribution of probabilities and an average of 30,000 bytes. That is it. There is just one (double) SHA256 hash committed to the block header.
This special transaction is verified by checking the asserted score and reward of each row (there can be from 1 to 10,000 rows) by computing the hash of the row appended to the previous block hash. There is no need to attach this hash to each row, neither in storage nor in communication, because it can always be recomputed.
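To make that concrete, here is a minimal sketch of the per-row check (the target constant, row layout, and function names are my own illustrative assumptions, not part of the proposal's spec): the only work per row is one double SHA256 over the row bytes appended to the previous block hash.

Code:
import hashlib

# Hypothetical per-share difficulty target; the real value would follow
# from the asserted score of the row.
SHARE_TARGET = 1 << 240

def sha256d(data: bytes) -> bytes:
    # Double SHA256, as used throughout Bitcoin.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_row(row: bytes, prev_block_hash: bytes) -> bool:
    # Recompute the share hash from the row itself; since it is always
    # recomputable, it never has to be stored or relayed with the row.
    digest = sha256d(row + prev_block_hash)
    return int.from_bytes(digest, "big") <= SHARE_TARGET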

As for peers needing to fetch this special transaction to be able to verify the finalized block, that is very common.
Since BIP 152, peers check whether each transaction committed to the Merkle root of the block under validation is present in their version of the mempool or not. In the latter case, they fetch the missing transaction from the peer and validate it.
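The reconciliation step can be sketched like this (simplified: real BIP 152 relay uses short transaction ids and a getblocktxn round trip, not full txids):

Code:
def missing_from_mempool(block_txids: list[bytes], mempool: set[bytes]) -> list[bytes]:
    # Txids committed to the block's Merkle root that we do not already
    # have locally; only these must be fetched from the announcing peer
    # and validated before the block can be accepted.
    return [txid for txid in block_txids if txid not in mempool]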

For ordinary transactions, as I have said before, the validation process is by no means trivial: it involves ECDSA signature verification and a UTXO consistency check for each input of each transaction, both of which are orders of magnitude more expensive than what has to be done for the (output) rows of the special transaction under consideration, the Shared Coinbase Transaction.
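A quick micro-benchmark makes the gap visible (a sketch assuming the third-party cryptography package; exact numbers vary by machine, but one ECDSA verification is typically hundreds of times slower than one double SHA256):

Code:
import hashlib, time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

msg = b"\x00" * 80  # a header-sized dummy message
key = ec.generate_private_key(ec.SECP256K1())
sig = key.sign(msg, ec.ECDSA(hashes.SHA256()))
pub = key.public_key()

N = 1_000
t0 = time.perf_counter()
for _ in range(N):
    pub.verify(sig, msg, ec.ECDSA(hashes.SHA256()))  # per-input cost of a normal tx
ecdsa_us = (time.perf_counter() - t0) / N * 1e6

t0 = time.perf_counter()
for _ in range(N):
    hashlib.sha256(hashlib.sha256(msg).digest()).digest()  # per-row cost of a share
hash_us = (time.perf_counter() - t0) / N * 1e6

print(f"ECDSA verify: {ecdsa_us:.0f} us/op, double SHA256: {hash_us:.2f} us/op")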

For each row of this transaction only a few processor cycles are needed to compute the hash, and even that is not required for all of the rows, just for the rows missing from the memory of the node, as sketched below.
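A minimal sketch of that caching behaviour (names hypothetical, building on the verify_row sketch above):

Code:
import hashlib

SHARE_TARGET = 1 << 240  # hypothetical per-share target, as above

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

already_verified: set[bytes] = set()  # rows validated while being relayed

def on_share_relayed(row: bytes, prev_block_hash: bytes) -> None:
    # Validate each share as it arrives and remember the raw row bytes.
    if int.from_bytes(sha256d(row + prev_block_hash), "big") <= SHARE_TARGET:
        already_verified.add(row)

def rows_needing_work(rows: list[bytes]) -> list[bytes]:
    # When the finalized block arrives, only rows never seen before
    # need their hash recomputed; everything else is a set lookup.
    return [row for row in rows if row not in already_verified]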

Conclusion: I maintain my previous assertion of zero computational overhead and an average block size increase of 32 KB.
Quote
Now I do understand that these proof-of-work share solutions are communicated continuously and not all at once at the Finalized block, but you’ve got at least three possible issues:

1. As I told you from the beginning of this time-wasting discussion, the small miners have to verify all the small proof-of-work solutions, otherwise they are trusting the security to the large miner which prepares the Finalized block. If they trust, then you do have a problem about non-uniform hashrate which changes the security model of Bitcoin. And if they trust you also have a change to the security model of Bitcoin.

Easy, dude, it is not time-wasting, and if it is, why in the hell should we keep doing this? Nobody reads our posts, people are busy with more important issues, and nobody is going to become the president of Bitcoin or anything.

I'm somewhat shocked reading this post, tho.
We have discussed it exhaustively before. It is crystal clear, imo.

First of all (I have to repeat): mining has nothing to do with verifying shares, blocks, whatever... Miners just perform zillions of nonce incrementations and hash computations to find a good hash. It is a full node's job to verify whatever should be verified. Agree?
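To put it in code (a toy sketch, not real miner code; Bitcoin's actual byte ordering and header layout are glossed over):

Code:
import hashlib

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, target: int) -> int:
    # All a miner does: bump a nonce and hash until the digest meets
    # the target. No verification of shares, transactions, or anything
    # else happens here; that is the full node's job.
    nonce = 0
    while True:
        digest = sha256d(header_prefix + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") <= target:
            return nonce
        nonce += 1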

Now, full nodes are busy with I/O operations, stuff that needs extensive networking and disk access; they have a lot of CPU power free, and a modern OS can utilize it to perform hundreds of thousands of SHA256 hashes without hesitation and without any bad performance consequence, as if nothing ever happened.
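This is easy to sanity-check yourself; even in pure Python, a hundred thousand share-sized double SHA256 hashes take a fraction of a second on commodity hardware (timings vary by machine, of course):

Code:
import hashlib, time

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

rows = [i.to_bytes(8, "big") for i in range(100_000)]
start = time.perf_counter()
for row in rows:
    sha256d(row)
print(f"100,000 double-SHA256 hashes in {time.perf_counter() - start:.2f} s")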

Is it that hard to keep this in mind and forget about what has been said in another context (the infamous block size debate)? Please concentrate.

In that debate the Core team was against the block size increase because they were worried about transaction verification being an I/O-bound task. With your share verification "nightmare" we are dealing with a CPU-bound task. It is not the same issue, so don't worry about it.