Right, OK, so my confusion in this scaling debate comes down to one question.
Assuming vast quantities of transactions and no restrictions on block size, where is the primary processing/storage bottleneck? Is it the temporary size of the mempool (pre-processing), or the permanent size of the blockchain (post-processing)?
With current technology the bottleneck is definitely storage, not processing. There are some things in the pipeline that should help quite a bit with fast access to large data sets, and they should begin rolling out over the next two years.
The UTXO set is the larger concern. However, not all transactions grow the UTXO set, and some shrink it (many inputs consolidated into a single output).
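To make that concrete, here's a toy Python sketch (not real node code; the Tx type and utxo_delta helper are made up for illustration). The net effect of a transaction on the UTXO set is just the number of outputs it creates minus the number of inputs it spends:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Tx:
    inputs: List[str]    # outpoints being spent ("txid:vout" strings, hypothetical)
    outputs: List[int]   # output values in satoshis

def utxo_delta(tx: Tx) -> int:
    """Positive: the tx grows the UTXO set; negative: it shrinks it."""
    return len(tx.outputs) - len(tx.inputs)

# A typical payment (1 input, 2 outputs incl. change) adds one UTXO...
payment = Tx(inputs=["a:0"], outputs=[50_000, 49_000])
print(utxo_delta(payment))        # +1

# ...while a consolidation (10 inputs, 1 output) removes nine.
consolidation = Tx(inputs=[f"b:{i}" for i in range(10)], outputs=[500_000])
print(utxo_delta(consolidation))  # -9
```

So the long-term UTXO burden depends on the mix of transaction types, not just raw transaction volume.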