By creating a UTXO that aggregates other UTXOs, you would be able to greatly reduce the size of the UTXO set on the blockchain. This could potentially lead to faster and more efficient transactions.
However, this approach may lead to increased complexity in the verification process. In order to verify that a UTXO is valid and can be spent, the blockchain would need to verify not only the inclusion proof of the aggregate UTXO, but also the inclusion proofs of all of the individual UTXOs that make up the aggregate UTXO.
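To make that verification burden concrete, here is a minimal sketch assuming the aggregate UTXO commits to its members as a Merkle-style root over their hashes; all names are hypothetical and `std::hash` is only a stand-in for a real cryptographic hash like SHA-256. Note that a spender would need one such proof for the aggregate plus one per individual UTXO inside it.

```cpp
// Sketch: verifying that one UTXO hash is included in an aggregate
// (Merkle-root-style) commitment. std::hash stands in for a real
// cryptographic hash; every name here is hypothetical.
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Combine two child hashes into a parent hash (toy stand-in).
uint64_t HashPair(uint64_t left, uint64_t right) {
    return std::hash<std::string>{}(std::to_string(left) + ":" + std::to_string(right));
}

struct ProofStep {
    uint64_t sibling;      // hash of the sibling node at this level
    bool sibling_is_left;  // true if the sibling sits on the left
};

// Walk the proof from the leaf up to the root and compare against the
// committed aggregate root.
bool VerifyInclusion(uint64_t leaf_hash,
                     const std::vector<ProofStep>& proof,
                     uint64_t expected_root) {
    uint64_t acc = leaf_hash;
    for (const ProofStep& step : proof) {
        acc = step.sibling_is_left ? HashPair(step.sibling, acc)
                                   : HashPair(acc, step.sibling);
    }
    return acc == expected_root;
}
```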
Can you make up a solution to this problem?
At its simplest, a UTXO reference is just a pair: a hex string (in this case, a 32-byte transaction hash) and an integer denoting the output number, which can usually be represented in a single unsigned byte (uint8_t); even if that overflows, a 2-byte uint16_t will definitely be enough. Either way, the per-entry size isn't really the problem - the problem is the sheer number of entries.
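As a sketch of that representation (hypothetical struct; Bitcoin itself actually uses a 4-byte output index, the uint16_t here just follows the assumption above):

```cpp
// Minimal outpoint representation: 32-byte transaction hash + output index.
#include <array>
#include <cstdint>

struct UtxoRef {
    std::array<uint8_t, 32> txid;  // 32-byte transaction hash
    uint16_t vout;                 // output index within that transaction
};

// On typical ABIs this packs to 34 bytes with no padding.
static_assert(sizeof(UtxoRef) == 34, "32-byte txid + 2-byte index");
```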
Suppose you have billions of UTXOs - that equates to tens of gigabytes of UTXO data. You can't simply compress the UTXO set, as that only delays the inevitable.
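A rough back-of-envelope check, assuming 2 billion entries and only the minimal 34-byte outpoint from the sketch above (real UTXO entries also carry an amount and a locking script, so the true figure would be larger):

```cpp
// Back-of-envelope: how much raw storage billions of minimal outpoints need.
#include <cstdint>
#include <cstdio>

int main() {
    const uint64_t entries = 2'000'000'000;  // assumed 2 billion UTXOs
    const uint64_t bytes_per_entry = 34;     // txid + output index only
    const double gib = double(entries) * bytes_per_entry / (1024.0 * 1024.0 * 1024.0);
    std::printf("~%.1f GiB just for the outpoints\n", gib);  // ~63.3 GiB
}
```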
So perhaps a shredding algorithm could be implemented, where UTXOs older than a particular threshold are ruthlessly shredded from the UTXO set; spending such a UTXO would then require a node to scan for it from the beginning of the blockchain. In other words, we simply don't cache extremely old UTXOs.
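A minimal sketch of that shredding idea, assuming each cached UTXO records the block height that created it (all names here are hypothetical):

```cpp
// Drop every cached UTXO older than a cutoff; spending one of the dropped
// outputs would then require rescanning the chain from the start.
#include <cstdint>
#include <string>
#include <unordered_map>

struct CachedUtxo {
    uint32_t creation_height;  // block height that created this output
    // amount, locking script, etc. would live here in a real cache
};

// Key: serialized outpoint (txid + index); value: cached metadata.
using UtxoCache = std::unordered_map<std::string, CachedUtxo>;

// Remove every UTXO created more than max_age blocks before the current tip.
void ShredOldUtxos(UtxoCache& cache, uint32_t tip_height, uint32_t max_age) {
    if (tip_height < max_age) return;  // nothing is old enough yet
    const uint32_t cutoff = tip_height - max_age;
    for (auto it = cache.begin(); it != cache.end(); ) {
        if (it->second.creation_height < cutoff)
            it = cache.erase(it);
        else
            ++it;
    }
}
```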
And if blockchain culling is also implemented - where the Genesis block and the first X thousand blocks are replaced by a single "coinbase transaction" containing the still-unspent outputs from those blocks - there is still no fear of old UTXOs being wiped out of the blockchain, because they would be carried forward in that snapshot.
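A hedged sketch of how such a snapshot "coinbase" could be built from the culled range; the types and names are hypothetical simplifications, not an existing protocol:

```cpp
// Roll the still-unspent outputs of the first `cull_count` blocks into one
// synthetic snapshot that replaces those blocks.
#include <cstddef>
#include <cstdint>
#include <set>
#include <string>
#include <vector>

struct Output {
    std::string outpoint;  // serialized txid + output index
    uint64_t value;        // amount carried by this output
};

struct Block {
    std::vector<Output> outputs;  // outputs created in this block
};

// Collect every output from the culled range that is not in the spent set;
// these become the outputs of the replacement snapshot "coinbase".
std::vector<Output> BuildSnapshotCoinbase(const std::vector<Block>& chain,
                                          std::size_t cull_count,
                                          const std::set<std::string>& spent) {
    std::vector<Output> snapshot;
    for (std::size_t h = 0; h < cull_count && h < chain.size(); ++h) {
        for (const Output& out : chain[h].outputs) {
            if (spent.count(out.outpoint) == 0)
                snapshot.push_back(out);  // still unspent: carry it forward
        }
    }
    return snapshot;
}
```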