I would be willing to run a full node on a testnet to see if my system could handle larger blocks, i.e. verify a large block in less than the average time between blocks.
I have a question: the total amount of work to verify N 1MB blocks is about the same as for a single N-MB block, right? For example, 32 1MB blocks take about the same amount of work to verify as a single 32MB block? (Please ignore the live delivery of blocks for the moment.) Or is there some advantage to large blocks in that fewer headers have to be processed? Imagine a full node has been off the air for a day or two and is just trying to catch up as fast as possible. What block size facilitates that best?
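For what it's worth, here is a back-of-envelope sketch in Python. Every constant in it is a made-up round number (header-check cost, signature-check cost, signatures per MB), not a measurement; the point is only that the per-block overhead is a single 80-byte header check, so the difference between 32 1MB blocks and one 32MB block comes down to a few dozen header checks:

```python
# Back-of-envelope: verifying N 1MB blocks vs one N-MB block.
# Every constant here is an assumed round number, not a measurement.

HEADER_CHECK_US = 2     # assumed cost to hash/check one 80-byte header (microseconds)
SIG_CHECK_US = 100      # assumed cost of one ECDSA signature verification (microseconds)
SIGS_PER_MB = 2500      # assumed number of signatures (inputs) per MB of transactions

def verify_cost_us(n_blocks, mb_per_block):
    """Approximate total cost to verify the given blocks, in microseconds."""
    header_cost = n_blocks * HEADER_CHECK_US
    sig_cost = n_blocks * mb_per_block * SIGS_PER_MB * SIG_CHECK_US
    return header_cost + sig_cost

small = verify_cost_us(32, 1)   # thirty-two 1MB blocks
big = verify_cost_us(1, 32)     # one 32MB block
print(f"32 x 1MB: {small / 1e6:.4f}s")
print(f"1 x 32MB: {big / 1e6:.4f}s")
print(f"difference: {small - big:.0f}us")  # 31 extra header checks
```

Under those assumptions the 32 small blocks cost only 31 extra header checks (about 62 microseconds against roughly 8 seconds of signature checking), so header overhead shouldn't favor either size much; when catching up, transaction verification dominates either way.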
To me it seems fees tend to be inversely related to block size, i.e. with smaller blocks fees rise as folks compete to get into blocks, and with larger blocks fees fall because there is less competition to get in. What does it cost a bad actor (if there is truly such a thing in this realm) to clog up the works? I suppose we are looking for the block size that causes them to expend their resources most quickly. Make the block size very small and fee competition would rise high enough to deplete the bad actor's funds very fast; everyone suffers higher fees until they are run out of town (so to speak). Hmm, but if the block size is very small, then even when there aren't any bad actors on the scene, regular legit users would be forced to compete. At the other end of the spectrum, make the block size very large and with so little competition fees would diminish. The real question here is what happens to fees/MB across the spectrum of block sizes.
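Here is a toy fee-market sketch of exactly that question. Everything in it is invented for illustration (the size of the pending pool, the exponential fee-rate distribution, the uniform 500-byte transactions), so treat it as a cartoon, not an answer:

```python
import random

# Toy fee market: a fixed pool of pending transactions per block interval,
# each 500 bytes, each with a random fee rate its sender will pay. The
# "miner" fills the block highest-fee-rate first; the marginal (lowest
# included) rate stands in for the going fee.

random.seed(1)
TX_BYTES = 500
pending = sorted((random.expovariate(1 / 20) for _ in range(4000)), reverse=True)
# ~4000 txs (~2MB of demand) per interval, fee rates averaging 20 sat/byte

for block_kb in (125, 250, 500, 1000, 2000, 4000):
    capacity = block_kb * 1000 // TX_BYTES       # how many txs fit
    included = pending[:capacity]
    full = len(included) == capacity
    marginal = included[-1] if full else 0.0     # no competition if not full
    total_fees = sum(included) * TX_BYTES        # satoshis collected
    print(f"{block_kb:>5}KB block: marginal {marginal:6.1f} sat/B, "
          f"total fees {total_fees:>12,.0f} sat")
```

In this toy model the marginal fee climbs steeply once capacity falls below demand and collapses toward zero once capacity exceeds it, with total fees collected peaking somewhere in between; real demand is elastic, and a bad actor can keep refilling the pool, so the real curve is messier.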
Is there *anyone* preferring a block size smaller than 1MB right now? I haven't heard of any, but you never know. I do think some miners artificially constrain the blocks they produce to around 900KB (I'm not sure of their motivation). Even if the block size limit were increased, such miners could still constrain the blocks they produce, right?
A transaction cannot span multiple blocks, right? I suppose the block size creates a functional upper limit on transaction size. Or is transaction size constrained some other way?
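As far as I know, yes: a transaction is atomic and has to fit entirely inside one block, so the consensus block size limit is the only hard ceiling on transaction size; in practice Bitcoin Core's relay policy also refuses transactions over 100,000 bytes (MAX_STANDARD_TX_SIZE), which is the limit most users actually hit. A sketch using the common rough size formula for legacy pay-to-pubkey-hash transactions (the 10/148/34 byte counts are approximations, not exact serialization sizes):

```python
# Consensus only requires that a transaction fit in a block; Bitcoin Core's
# relay policy additionally refuses to relay "non-standard" transactions
# above 100,000 bytes (MAX_STANDARD_TX_SIZE). The formula below is the
# common rough estimate for legacy P2PKH: ~10 bytes overhead,
# ~148 bytes per input, ~34 bytes per output.

MAX_STANDARD_TX_SIZE = 100_000   # policy limit (bytes)
MAX_BLOCK_SIZE = 1_000_000       # consensus block size limit (bytes)

def approx_tx_size(n_inputs, n_outputs):
    return 10 + 148 * n_inputs + 34 * n_outputs

for n_in in (1, 675, 676, 6756, 6757):
    size = approx_tx_size(n_in, 2)
    print(f"{n_in:>5} inputs: ~{size:>9,} bytes  "
          f"standard: {size <= MAX_STANDARD_TX_SIZE}  "
          f"fits in a 1MB block: {size <= MAX_BLOCK_SIZE}")
```

Note the policy limit only governs relay; a miner could still mine a near-block-sized transaction it constructed itself.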