Yeah, it definitely doesn't take that long at today's block size; I'm just thinking ahead to when blocks are much larger. I'm trying to come up with a core design that can elegantly handle large blocks, if the network ever gets to the point where they're in use.
Sorry if I wasn't clear. It shouldn't take much time regardless of block size, because most of the transactions in the block have already been validated and are waiting in the mempool. Before the software cached that validation it took a fair amount of time, but not anymore. Getting most of the validation off the latency-sensitive critical path is a big improvement.
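In rough C++ terms the idea looks something like the sketch below. All the names here (Tx, VerifyScripts, ApplyToUtxoSet, AcceptToMempool) are hypothetical stand-ins, not the real client's API: the expensive script checks happen once at mempool-accept time, and block connection only re-verifies transactions the mempool hasn't already seen.

```cpp
#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical stand-ins for illustration only.
struct Tx { std::string txid; };

// Stub for the expensive signature/script verification.
bool VerifyScripts(const Tx&) { return true; }

// Stub for the cheap spent/unspent UTXO-set update.
bool ApplyToUtxoSet(const Tx&) { return true; }

// Txids whose scripts were fully verified at mempool-accept time.
static std::unordered_set<std::string> g_verified;

// Mempool acceptance: pay the verification cost once, off the
// block-arrival critical path, and remember the result.
bool AcceptToMempool(const Tx& tx) {
    if (!VerifyScripts(tx)) return false;
    g_verified.insert(tx.txid);
    return true;
}

// Block connection: only transactions the mempool hasn't seen need
// the expensive checks; the rest is just the fast UTXO update.
bool ConnectBlock(const std::vector<Tx>& block) {
    for (const Tx& tx : block) {
        if (!g_verified.count(tx.txid) && !VerifyScripts(tx)) return false;
        if (!ApplyToUtxoSet(tx)) return false;
    }
    return true;
}
```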
Ahh, I see what you're saying. I'll definitely be able to take advantage of that as well; it lets everything run at the same speed as the core spent/unspent UTXO update, since all the work it queues up to verify will already have been done. That core piece I can run at over 10,000 tps on an SSD, and all the pre-verification can be done using multiple separate disks across which the block data is spanned. I'm trying to extract as much parallelism from the processing as I can, and to allow adding disks to increase I/O speed if that becomes a bottleneck.
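Roughly what I mean by spanning the work across disks, again with hypothetical names (shards stands in for the slices of block data stored on separate disks): one worker per disk does the pre-verification in parallel, and only the fast spent/unspent update stays on the serial critical path.

```cpp
#include <atomic>
#include <string>
#include <thread>
#include <vector>

// Hypothetical stand-ins for illustration only.
struct Tx { std::string txid; };
bool VerifyScripts(const Tx&) { return true; }   // expensive, parallelizable
bool ApplyToUtxoSet(const Tx&) { return true; }  // fast, serial core step

// Verify a block with one worker per disk/shard, then run the serial
// spent/unspent update once every shard has verified cleanly.
bool ConnectBlockParallel(const std::vector<std::vector<Tx>>& shards) {
    std::atomic<bool> ok{true};
    std::vector<std::thread> workers;
    for (size_t i = 0; i < shards.size(); ++i) {
        workers.emplace_back([&shards, i, &ok] {
            for (const Tx& tx : shards[i])
                if (!VerifyScripts(tx)) { ok = false; return; }
        });
    }
    for (auto& w : workers) w.join();
    if (!ok) return false;

    // Serial critical path: just the UTXO-set update, since all the
    // queued verification work has already been done.
    for (const auto& shard : shards)
        for (const Tx& tx : shard)
            if (!ApplyToUtxoSet(tx)) return false;
    return true;
}
```

Adding a disk just means adding another shard and its worker, so read bandwidth and verification scale together.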