I'm not sure how you don't know the obvious answer to that question: because the blocks are larger. Just because the block size limit has been 1 MB for years doesn't mean blocks were actually 1 MB in size; blocks before 2015-2016 were usually much smaller. Note: downloading these blocks at 2 MB/s isn't the bottleneck, validating them is. You can speed this up with a high "dbcache" setting (e.g. 4-8 GB).
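For reference, dbcache is set in MiB and defaults to a few hundred MiB; a larger value keeps more of the UTXO set in RAM, so validation does fewer disk reads. A minimal bitcoin.conf snippet (4000 is just an example figure for a machine with spare RAM):

```
# bitcoin.conf
dbcache=4000   # database/UTXO cache size in MiB; raise only if you have the RAM
```

The same thing works as a command-line flag: `bitcoind -dbcache=4000`.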
Does that mean validating 10 blocks (each 0.2 MB) takes less time than validating 2 blocks (each 1 MB)? That would explain it, although I don't get why this would be different.
I only have 4 GB of RAM, so a much larger dbcache won't work.
Yes, essentially: the expensive work is per transaction, not per block, so while blocks are small the block counter climbs much faster for the same validation effort.
Your question contains an assumption: that downloading is the only thing that's happening, or that it's the only thing that takes time. It's not.
The signatures of the transactions need checking. That involves looking up, for every input, the earlier transaction output where the coins being spent were minted (a considerable amount of disk I/O when the cache is small), and then checking that each input's signature cryptographically proves the spender is entitled to those coins. And Bitcoin has to do that for every transaction in every block since genesis.
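Here's a toy sketch of that loop, just to show where the time goes. None of this is Bitcoin Core's actual code: the names, data shapes, and the `verify_sig` stub are invented for illustration, and real validation (script interpreter, sighash rules, signature caching) is far more involved.

```python
# Toy model of why per-block validation cost tracks transaction count,
# not block count. All names and structures here are made up.

from dataclasses import dataclass

@dataclass
class TxInput:
    prev_txid: str    # transaction that minted the coin being spent
    prev_index: int   # which output of that transaction
    signature: bytes

@dataclass
class Tx:
    txid: str
    inputs: list      # list[TxInput]

def verify_sig(signature: bytes, owner_pubkey: bytes) -> bool:
    # Stand-in for real ECDSA verification: the expensive CPU step.
    return True

def validate_block(block_txs, utxo_set):
    # utxo_set: dict mapping (txid, index) to the pubkey entitled to spend
    # that coin. When this set doesn't fit in RAM (small dbcache), each
    # lookup can turn into a disk read: the expensive I/O step.
    for tx in block_txs:
        for txin in tx.inputs:
            owner = utxo_set[(txin.prev_txid, txin.prev_index)]
            if not verify_sig(txin.signature, owner):
                raise ValueError(f"bad signature in {tx.txid}")
    # Total work scales with the number of inputs across all txs: a
    # 0.2 MB block with a handful of txs is far cheaper than a full
    # 1 MB block, even though both count as "one block" of progress.
```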
The early blocks (heights 1-170,000) barely contain any tx data at all, as almost no one was sending BTC around. Once we get to around 2011 (past block 170,000), the tx rate ramps up sharply, and so the cost of the initial sync grows even faster than the block height does.