What I meant was that big spam transactions would cost them an amount of hashing power that they - if they ignored these transactions or gave them a very low priority - could better use to find blocks faster and gain an advantage over their competitors.
Well, saying that it would "cost them" differently is also incorrect. If you have two transactions:
1) a native-to-native tx which is part of a big group of spam, paying fee X, and
2) a native-to-segwit or segwit-to-segwit tx which is genuine, also paying fee X,
then it doesn't matter much to the miner. Both cost them the same amount.
from the point of view of the asics it makes no difference.
from the point of view of the pools: a pool will have validated a tx before putting it into its mempool, so putting it in a raw (unsolved) block minutes later, whether that's block 4xx,xx1 or 4xx,002, makes no difference to the cpu time of forming a raw block to get a hash to send to the asics to solve.
emphasis: the quadratic/cpu-intensive time only happens once for a pool, when it is first relayed a tx and validates it to add it to its mempool. the creation of a raw block minutes later is just collating data, not revalidating the txs again.
the choice of what gets into a raw block is more about preference. some pools (btcc) love to let their own internal customers' txs in fee-free. other pools want the most expensive txs first. and some pools want to distribute mature rewards to all the external miners first.
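to picture those two points, here is a rough python sketch (MempoolEntry, build_template and the preference lambdas are made up for illustration, not any pool's real code): the signature/script checks happened once at mempool acceptance, so building the raw block later is just selecting and ordering already-validated entries by whatever policy the pool prefers.

# minimal sketch, hypothetical data structures; real pool software differs.
# the point: validation happened once, at mempool acceptance, so template
# creation is just collating already-validated entries in a preferred order.
from dataclasses import dataclass

@dataclass
class MempoolEntry:
    txid: str
    fee: int            # satoshis, verified when the tx entered the mempool
    size: int           # bytes
    own_customer: bool  # e.g. a pool letting its own users in fee-free

def build_template(mempool, preference, max_block_size=1_000_000):
    chosen, used = [], 0
    for entry in sorted(mempool, key=preference):   # no re-validation here
        if used + entry.size <= max_block_size:
            chosen.append(entry.txid)
            used += entry.size
    return chosen  # txids to collate into a raw (unsolved) block

# two example preferences: highest fee-rate first, or own customers first
by_feerate  = lambda e: -e.fee / e.size
by_customer = lambda e: (not e.own_customer, -e.fee / e.size)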
some pools want to waste other pools' time by making spammy blocks, so the first pool can concentrate on the next block while their competitors are left hanging, validating the first block.
also segwit is "supposedly" 75% cheaper, which means pools get a 4x smaller fee bonus from a segwit tx.
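rough arithmetic of that discount (the weight formula is BIP141's; the vsize helper, fee rate and byte counts are illustrative numbers, not real transactions):

# weight = base_size * 3 + total_size (BIP141); vsize = ceil(weight / 4)
# witness bytes count at roughly 1/4, hence the "75% discount"
import math

def vsize(base_size, witness_size):
    total_size = base_size + witness_size
    weight = base_size * 3 + total_size
    return math.ceil(weight / 4)

fee_rate = 50  # sat per vbyte, illustrative

legacy_fee = fee_rate * vsize(250, 0)    # 250-byte legacy tx, no witness
segwit_fee = fee_rate * vsize(100, 150)  # same 250 raw bytes, 150 of them witness

print(legacy_fee, segwit_fee)  # the segwit tx pays much less for the same raw bytes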
there are also issues if they add segwit txs: they have to form the 2 merkles, and then have some peers (old nodes connected to pools) request the pool to strip the block down to just the base block.
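a sketch of that idea (simplified merkle construction, placeholder tx data; dsha256, merkle_root and strip_for_old_node are hypothetical helpers, and the coinbase-commitment details of BIP141 are glossed over): the txid root sits in the 80-byte header that old nodes check, the wtxid root is committed inside the coinbase, and old peers are served the block with the witness data stripped.

import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    if not leaves:
        return b"\x00" * 32
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last leaf when odd
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# placeholder transactions: base serialization plus witness bytes
txs = [{"base": b"base-%d" % i, "witness": b"wit-%d" % i} for i in range(4)]

txid_root  = merkle_root([dsha256(t["base"]) for t in txs])                 # header root
wtxid_root = merkle_root([dsha256(t["base"] + t["witness"]) for t in txs])  # coinbase commitment

def strip_for_old_node(block_txs):
    # old (pre-segwit) peers get the block serialized without any witness data
    return [t["base"] for t in block_txs]

print(txid_root.hex(), wtxid_root.hex(), strip_for_old_node(txs))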
* however some pools would not treat a $0.25 tx as having higher priority than a $1 tx purely because it's segwit
it's taken years of debate and there's still no guarantee of moving the block size even once.. do you honestly think moving to 1.2mb is going to benefit the network, and then having another few years of debating to get to 1.4mb..
There is no debate. I have already mentioned that this would be done with 1 hard fork, so the subsequent rises (1.2 to 1.4 to 1.6 and so on) would be hard coded.
if you're talking about progressive blocksize movements that are automated by the protocol and not dev decisions per change.. then you are now waking up to the whole point of dynamics.. finally you're looking past blockstream control and starting to think about the network moving forward without dev spoon-feeding. it only took you 2 years (even if you think that hard-limiting it at silly low amounts is good)
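to make the distinction concrete, an illustrative toy only (not any actual BIP; both functions, the heights and the percentages are made-up numbers): a hard-coded schedule bakes every future rise into one fork, whereas a dynamic rule moves the limit from observed demand with no further dev decisions.

def scheduled_limit(height, start_height=500_000):
    # fixed step baked in once: +0.2mb every 100,000 blocks after activation
    if height < start_height:
        return 1_000_000
    steps = (height - start_height) // 100_000
    return 1_000_000 + 200_000 * (steps + 1)

def dynamic_limit(recent_block_sizes, current_limit):
    # grow or shrink the next limit from how full recent blocks actually were
    avg = sum(recent_block_sizes) / len(recent_block_sizes)
    if avg > 0.9 * current_limit:
        return int(current_limit * 1.1)
    if avg < 0.5 * current_limit:
        return int(current_limit * 0.95)
    return current_limit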
I am not strongly interested in hard fork proposals until I see someone coming up with solutions for the sigops problem.
very simple: keep sigops at a REAL 4k, or below 4k, per tx.
P.S. if segwit went in soft first and then removed the kludge afterwards to go back to 1 merkle, that would mean removing the 'witness discount', which would then bring back the quadratics risk of a REAL 16k sigops (8 min native validation time)
**(disclaimer: there is bait in my last sentence, i wonder if you will bite)
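for reference on the quadratics point, a toy model only (legacy_sighash_bytes and bip143_sighash_bytes are made-up helpers with rough byte counts, not consensus code): legacy sighash re-serializes and hashes roughly the whole tx for every input, so bytes hashed grow ~n^2 with input count, while BIP143-style hashing precomputes fixed hashes and stays ~linear.

# toy model: why legacy sighash work grows roughly quadratically with inputs,
# while bip143-style hashing stays roughly linear.

INPUT_SIZE = 148   # rough bytes per legacy input, illustrative
OUTPUT_BYTES = 34  # one output, illustrative

def legacy_sighash_bytes(num_inputs):
    tx_size = num_inputs * INPUT_SIZE + OUTPUT_BYTES
    # each input signs a hash of (roughly) the whole serialized transaction
    return num_inputs * tx_size

def bip143_sighash_bytes(num_inputs):
    # hashPrevouts/hashSequence/hashOutputs computed once, then each input
    # hashes a small fixed-size preimage
    precompute = 3 * (num_inputs * 36 + OUTPUT_BYTES)
    per_input = 156  # rough fixed preimage size, illustrative
    return precompute + num_inputs * per_input

for n in (100, 1000, 5000):
    print(n, legacy_sighash_bytes(n), bip143_sighash_bytes(n))
# the legacy column blows up ~n^2; the bip143 column grows ~linearly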