I'm not sure how you don't know the obvious answer to that question: because the blocks are larger. Just because the block size limit has been 1 MB for years, that does not mean the blocks were actually 1 MB in size. Blocks prior to 2015 and 2016 were usually much smaller. Note: downloading these blocks at 2 MB/s isn't the bottleneck, validating them is. You can speed this up with a high "dbcache" setting (e.g. 4-8 GB).
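For reference, here's a minimal way to set that in Bitcoin Core's bitcoin.conf (the value is in MiB; 4096 is just an example and assumes you actually have that much RAM to spare):

```ini
# bitcoin.conf
# Increase the UTXO/database cache to speed up block validation during sync.
# Value is in MiB; 4096 ≈ 4 GB. Default is far lower.
dbcache=4096
```

The same option can be passed on the command line as `-dbcache=4096` if you don't want to edit the config file.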
Does that mean validating 10 blocks (each 0.2 MB) takes less time than validating 2 blocks (each 1 MB)? That would explain it, although I don't see why that would make a difference.
I only have 4 GB of RAM, so a much larger dbcache won't work.
Hint: Something like this should probably be discussed in a separate thread.
Probably

I've had this on my mind for a while, and reading about the 20 MB blocks triggered it.