Hard fork to 2MB, and then immediately start screaming about the next increase to 4MB in ten years?
NO. You add an automatically adjusting blocksize so that less of the management sits in human hands.
Humans are corruptible and greedy; the only way to solve block size increases is to make them automatic in the code, so we don't rely on infighting every 4-5 years.
An easy way to achieve this is to build a 'speed test' mechanism into the implementation.
In the code, the node already knows the time when it asks for a new block, and the time when it finishes validating that block.
So it can use these two milestones to produce a 'score'.
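Roughly, the per-block part could look like this (a minimal Python sketch, not tied to any real node codebase; request_block and validate_block are placeholder names, only the two timestamps matter):

```python
import time

# Sketch only: request_block and validate_block stand in for the real
# networking/validation code. The point is the two milestones described above.

def request_block(height):
    """Ask a peer for block `height` (placeholder) and note the request time."""
    return {"height": height, "requested_at": time.monotonic()}

def validate_block(block):
    """Download and fully validate the block (placeholder); return the finish time."""
    # ... real download + validation would happen here ...
    return time.monotonic()

def block_score(height):
    """Seconds from 'asked for the block' to 'block fully validated'."""
    block = request_block(height)
    finished_at = validate_block(block)
    return finished_at - block["requested_at"]
```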
For example:
Say a block should be requested and validated in under 120 seconds (2 minutes). Over a 2016-block period that gives a target score of 2016 x 120 = 241,920.
If the node's total score over a 2016-block period comes in BELOW 241,920 (meaning it was faster than the target), then it can easily handle block validation at the current size.
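A hedged sketch of that 2016-block check, using the same numbers as above (the constant and function names are just illustrative):

```python
BLOCKS_PER_PERIOD = 2016            # one retarget-sized window, as above
PER_BLOCK_TARGET_SECS = 120         # "under say 120 seconds"
TARGET_SCORE = BLOCKS_PER_PERIOD * PER_BLOCK_TARGET_SECS  # = 241,920

def can_handle_current_size(per_block_scores):
    """per_block_scores: seconds-to-validate for recent blocks, newest last."""
    if len(per_block_scores) < BLOCKS_PER_PERIOD:
        return None  # not enough history yet to judge
    total = sum(per_block_scores[-BLOCKS_PER_PERIOD:])
    # Lower total = faster node; below the target means it keeps up comfortably.
    return total < TARGET_SCORE
```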
This score can be included in the node's user agent/metadata, making it easy to gather stats across the network and see what the nodes are capable of.
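As a rough illustration of how the score could ride along in the user agent (BIP 14 puts optional comments in parentheses after the version; the 'score:' tag itself is made up for this example, nothing in the protocol defines it):

```python
def user_agent_with_score(base_agent, period_score):
    """Append the period score as a BIP 14-style comment, e.g.
    '/Satoshi:27.0.0/' -> '/Satoshi:27.0.0(score:241000)/'.
    The 'score:' label is hypothetical, purely for illustration."""
    comment = f"(score:{int(period_score)})"
    if base_agent.endswith("/"):
        return base_agent[:-1] + comment + "/"
    return base_agent + comment
```

Crawlers that already index user agents could then aggregate these values and show roughly how much headroom the network's nodes have.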