^ 13000 tx every 10 minutes would be really cool, and since Segwit making leaner txs still resulted in full blocks, I always expected this to be the gradual way forward -- upgrades that produce leaner, more efficient txs, rather than just widening the bandwidth.
Then again, back in the 1990s I thought everyone would focus development on better compression and leaner data formats... I spent hours on websites making sure they were as small as possible (in bytes). It went the other way (in my view) -- bandwidth just exploded, and people stopped caring about efficiency.
Segwit has not made transactions leaner in actual byte counts.. it miscounts the bytes.
A 13.5k-tx block would come from removing the byte-miscount kludge code and allowing full txdata use of the entire 4MB of space, rather than the current 1MB tx data + 3MB witness split -- which, due to the miscount, hinders transactions to roughly a 1.5MB ability, or lets junk data unrelated to the signature proof of a UTXO spend fill the witness area and soak up that 3MB excess.
A full, clean, standard 4MB blockspace used properly for txdata plus clean, efficient signature proofs would result in a ~13.5k-tx block of 4MB.
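To put rough numbers on the accounting being argued about: under BIP 141, non-witness bytes count 4 weight units each and witness bytes 1 each, against a 4,000,000-WU limit. A minimal sketch, assuming an illustrative 2-in-2-out spend of ~154 base bytes and ~218 witness bytes (ballpark figures I'm assuming for illustration, not measured mainnet averages):

```python
WEIGHT_LIMIT = 4_000_000  # BIP 141 consensus limit, in weight units (WU)

def tx_weight(base_bytes: int, witness_bytes: int) -> int:
    """BIP 141 weight: non-witness bytes count 4 WU each, witness bytes 1 WU each."""
    return base_bytes * 4 + witness_bytes

# Illustrative 2-in-2-out spend (assumed sizes):
base, wit = 154, 218
w = tx_weight(base, wit)
print(w)                              # -> 834 WU for one such tx
print(WEIGHT_LIMIT // w)              # -> 4796 txs per block under weight accounting
print(WEIGHT_LIMIT // (base + wit))   # -> 10752 txs if 4MB were counted as flat bytes
```

The gap between the last two numbers is the capacity difference the post is pointing at: flat byte counting over the same 4MB fits far more of these txs than weight accounting does.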
It's also worth noting..
When Core came up with the kludgy byte-miscount math for Segwit to make txdata more expensive than witness data, they were not incentivising lean tx data. They were incentivising cheap, bloaty witness data by a 4x factor.
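The 4x factor in fee terms: fees are quoted per "virtual byte" (weight / 4, rounded up), so a byte placed in the witness costs a quarter of a byte placed in the base tx data. A small sketch, where the feerate and the 100-byte split are illustrative assumptions:

```python
import math

def vsize(base_bytes: int, witness_bytes: int) -> int:
    # BIP 141 "virtual size": weight divided by 4, rounded up
    return math.ceil((base_bytes * 4 + witness_bytes) / 4)

FEERATE = 10  # sat/vbyte, purely illustrative

# The same 100 bytes, placed in base data vs. placed in witness data:
print(vsize(100, 0) * FEERATE)   # -> 1000 sats as base bytes
print(vsize(0, 100) * FEERATE)   # -> 250 sats as witness bytes (4x cheaper)
```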
The funny part is that ASICs don't choose the transactions for a block, so there is no extra computational cost to ASICs for X or Y types of transaction, nor for how many transactions appear in a block. So Core's economic model of reasoning makes no sense against reality.
Especially when the computations for a pool manager validating a 2-in-2-out tx are, for the txdata:
utxoset: -2 entries
utxoset: +2 entries
yet the scripts require more computational power:
read the utxo entries to obtain the spend keys, twice
SHA256 the complete transaction to get the hash to sign/check, twice
ECDSA-verify the signature using the key, twice
(and other stuff)
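The steps above can be sketched as code. A minimal sketch, assuming a dict-backed UTXO set; `check_signature` is a placeholder stub for the CPU-heavy ECDSA step, not a real verifier, and the serialization is a stand-in:

```python
import hashlib

def check_signature(pubkey, sig, msg_hash):
    # placeholder for ECDSA verification -- the expensive script step
    return True

# dict-backed UTXO set: (txid, vout) -> output data
utxo_set = {
    ("aa" * 32, 0): {"pubkey": b"\x02" + b"\x00" * 32, "amount": 50_000},
    ("bb" * 32, 1): {"pubkey": b"\x03" + b"\x00" * 32, "amount": 70_000},
}

def validate_tx(tx):
    serialized = repr(tx).encode()  # stand-in for real tx serialization
    txid = hashlib.sha256(hashlib.sha256(serialized).digest()).hexdigest()
    for inp in tx["inputs"]:
        # txdata side: utxoset -2 entries -- a cheap dict removal per input
        prev = utxo_set.pop((inp["txid"], inp["vout"]))
        # script side: double-SHA256 the tx and check the signature per input
        sighash = hashlib.sha256(hashlib.sha256(serialized).digest()).digest()
        assert check_signature(prev["pubkey"], inp["sig"], sighash)
    for i, out in enumerate(tx["outputs"]):
        # txdata side: utxoset +2 entries -- a cheap dict insertion per output
        utxo_set[(txid, i)] = out

tx = {
    "inputs": [
        {"txid": "aa" * 32, "vout": 0, "sig": b"sig1"},
        {"txid": "bb" * 32, "vout": 1, "sig": b"sig2"},
    ],
    "outputs": [{"amount": 60_000}, {"amount": 55_000}],
}
validate_tx(tx)
print(len(utxo_set))  # -> 2 (removed 2 entries, added 2)
```

The point the sketch makes: the txdata bookkeeping is a handful of constant-time dict operations, while the per-input hashing and signature checks are where the actual CPU time goes.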
So it's actually the scripts that cost the pool manager (not the ASICs) more computational power, which shows that the witness (scripts) carries a higher processing cost than the tx data.
Yet they want to make normal-use lean tx data 4x more costly than the bloaty witness.