ok, we've all been led to believe up to now that validation of txs had to occur twice by full nodes: first, upon receipt; second, upon receipt of the block. this was crucial to the FUD scare tactic of decrementing
[...]
what am i missing?
Where I explicitly pointed out to you, in many places and in excruciating detail, that this was not at all the case?
https://www.reddit.com/r/Bitcoin/comments/39tgno/letting_miners_vote_on_the_maximum_block_size_is/cs6rek5?context=3 You seemed so happy to argue with it before; has your memory vanished now that you don't think it would be convenient for you?
what's interesting is that we've never seen it done to the degree it is now. we had the Mystery Miner a few years ago, but he stopped pretty quickly. also, despite many upgrades to the protocol previously, we've never had a fork as a result of SPV mining before either. what's different this time is the consistently full blocks and the fact that Wang Chun told us they create SPV blocks in response to large blocks as a defense. it seems they consider full blocks to be large blocks, so the excessive SPV mining created last night's fork in light of BIP66 and the upgrade to 0.10.x. in that sense, the 1MB cap is the direct cause of what is happening.
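To make the fork mechanics in the post above concrete, here is a toy sketch (entirely hypothetical names and structure, not Bitcoin Core code) of why header-only "SPV" mining can extend an invalid chain once a new rule like BIP66 is enforced: a full node rejects the invalid block, but a miner building on the bare header never checks it.

```python
# Toy model, assuming a single boolean stands in for BIP66 strict-DER
# compliance; real validation is far more involved.

def validate_block(block, strict_der_required):
    """Full validation: reject blocks whose txs violate strict DER
    once the BIP66 rule is being enforced."""
    if strict_der_required and not block["strict_der"]:
        return False
    return True

def spv_mine(parent_header):
    """Header-only mining: build an (empty) block on top of a header
    without ever validating the parent block's transactions."""
    return {"parent": parent_header["hash"], "strict_der": True}

# A miner produces a block violating BIP66 after enforcement begins.
bad_block = {"hash": "B1", "strict_der": False}

# Full nodes reject it...
assert validate_block(bad_block, strict_der_required=True) is False

# ...but an SPV miner, seeing only the header, happily extends it,
# wasting its work on a chain that full nodes will never accept.
child = spv_mine(bad_block)
print(child["parent"])  # built directly on the invalid parent
```

This is the failure mode the post describes: the fork persists as long as SPV miners keep extending the invalid tip faster than validating miners extend the valid one.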
The incoherence in some of these posts is foaming so thick that it's oozing out and making the floor slick; careful-- you might slip and mess up your future as "the LeBron James of the Bitcoin world" (as your attorney
described you (18:30), under oath, to a federal judge as part of
litigation related to your possession of 3000 BTC taken primarily from members of this forum).
As miners have created larger blocks, F2Pool experienced high orphaning (>4% according to them); they responded by adding software to mine without transferring or verifying blocks, to avoid the delays involved in transferring and processing block data. Contrary to your claim-- the blocksize limit stems the bleeding here. Their issue is that large blocks take more time to transfer and handle, and that they're falling behind as a result. Making blocks _bigger_ would not help this problem; it would do the _opposite_. If a miner wanted to avoid any processing of the transaction backlog, they'd simply set their minimum fee high and they'd never even mempool the large backlog.
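The last point above can be sketched in a few lines (hypothetical helper, not Bitcoin Core's actual mempool code): a miner enforcing a high minimum feerate never admits the low-fee backlog into its mempool in the first place, so a large backlog imposes no storage or processing cost on it.

```python
# Toy feerate filter, assuming made-up tx dicts and a flat sat/byte
# threshold; real policy (e.g. minrelaytxfee) is more nuanced.

def admit_to_mempool(mempool, tx, min_feerate_sat_per_byte):
    """Only accept a transaction whose feerate meets the miner's floor."""
    feerate = tx["fee_sat"] / tx["size_bytes"]
    if feerate >= min_feerate_sat_per_byte:
        mempool.append(tx)

backlog = [
    {"fee_sat": 1000, "size_bytes": 250},   # 4 sat/byte
    {"fee_sat": 2500, "size_bytes": 250},   # 10 sat/byte
    {"fee_sat": 500,  "size_bytes": 500},   # 1 sat/byte
]

mempool = []
for tx in backlog:
    admit_to_mempool(mempool, tx, min_feerate_sat_per_byte=5)

print(len(mempool))  # only the 10 sat/byte tx is ever stored -> 1
```

In other words, a fee floor filters the backlog at the door; no miner is forced to "process" a backlog it has priced out.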
Reasonable minds can differ on the relative importance of different considerations, but when you're falling all over yourself to describe evidence against your position as support of it-- redefining F2Pool's crystal-clear and plain description of "large blocks" as the source of their problems with the technically inexplicable "full" that you think supports your position-- it really burns up whatever credibility you had left. That you can get away with it in this thread without a loud wall of "WTF" just shows what a strange echo chamber it has become.
1. Why do larger mining pools have fewer orphans, assuming most miners, even small ones, are connected to the relay network?
2. Even if mining pools set higher minimum fees, aren't the unconfirmed txs still added to their mempools?
3. How is it that 1MB just "happened" to be the magic number at which blocks are deemed "large"?
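On question 1, a back-of-the-envelope model (my own simplification, not from the thread) shows why pool size matters independently of the relay network: during the propagation delay after a pool finds a block, competing blocks appear at a rate proportional to the hashrate that *isn't* the pool's own, since a pool never races against its own block.

```python
# Toy orphan-rate model. Assumptions (all mine): Poisson block arrivals
# with mean interval T = 600 s, a fixed propagation delay tau during
# which the rest of the network can find a competing block, and a pool
# controlling fraction p of total hashrate.

import math

def orphan_probability(tau_seconds, pool_share, block_interval=600.0):
    """P(a competing block appears during the propagation window)."""
    competing_rate = (1.0 - pool_share) / block_interval
    return 1.0 - math.exp(-competing_rate * tau_seconds)

# Same 15 s propagation delay, different pool sizes:
for p in (0.02, 0.10, 0.25):
    print(f"share={p:.2f}  orphan_prob={orphan_probability(15, p):.4f}")
```

Two effects compound: larger pools subtract more hashrate from the race, and larger blocks increase tau, which is exactly the "time to transfer/handle" problem F2Pool described.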