I just came back to this topic and found myself wrong about the issue the OP has brought to us. I was sure the special stacking technique used to generate the block locator ensures efficiency, but the OP is questioning a DoS vulnerability caused by an arbitrarily large locator stacked with bogus values.
And the OP is right! The code deserializes the getheaders message directly into a block locator and exhaustively tries to look up every supplied hash. The most naive approach possible. I'm just surprised.
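To make the pattern concrete, here is a minimal, self-contained C++ sketch of the kind of loop being criticized. It is not Bitcoin Core's actual code; BlockLocator, g_block_index and FindForkPoint are illustrative stand-ins for Core's CBlockLocator, mapBlockIndex and its fork-finding logic.

```cpp
// Sketch (not Bitcoin Core's code) of the naive pattern under discussion:
// every hash in a peer-supplied locator triggers an index lookup, and
// nothing bounds the locator's size.
#include <array>
#include <cstdint>
#include <map>
#include <vector>

using Hash256 = std::array<uint8_t, 32>;            // stand-in for uint256

struct BlockIndexEntry {
    int height = 0;
    bool in_active_chain = false;
};

// Stand-in for the node's global block index (mapBlockIndex in Core).
std::map<Hash256, BlockIndexEntry> g_block_index;

struct BlockLocator {
    std::vector<Hash256> have;                       // hashes sent by the peer, newest first
};

// Return the first locator entry we recognize on the active chain.
// Cost is one map lookup per supplied hash; an attacker can pad the
// locator with arbitrary 32-byte values and the loop still walks all
// of them before giving up.
const BlockIndexEntry* FindForkPoint(const BlockLocator& locator)
{
    for (const Hash256& h : locator.have) {
        auto it = g_block_index.find(h);
        if (it != g_block_index.end() && it->second.in_active_chain)
            return &it->second;
    }
    return nullptr;                                  // caller falls back to genesis
}
```

The point is simply that the cost is one index lookup per peer-supplied hash, and nothing bounds how many hashes the peer may supply.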

Then I see another weird thing: @GMaxwell shows little interest in the OP's review and asks for a kind of Proof of Labour, or something like it, from the OP's side.

Looking up an entry is O(1) -- just a trivial hashing operation and one or two pointer chases.
So basically what you're saying is that you can make the node do a memory access per 32 bytes sent, but virtually any network message also does that. E.g. getblock.
Locators have no reason to be larger than O(log(blocks)), so indeed it's silly that you can send a big one... but I wouldn't expect it to be interesting. Alternatively, you could consider what the difference is between sending 100k at once vs 20 (a totally reasonable number) many times. Only a trivial amount of network overhead and perhaps a couple of milliseconds of blocking other peers whose message handling would otherwise be interleaved. If you benchmark it and find out that it's significantly slower per byte sent than other arbitrary messages I'd be interested to hear... but without a benchmark I don't find this interesting enough to check myself.
Being "interesting" or not, this IS a DoS vulnerability and could be easily mitigated by enforcing all or a combination of the following controls on the light clients'
getheaders request:
1- Check the length of the supplied
block locator not to be greater than 10*Log
2(max_height) + a_safe_not-too_large_threshold
2- Check that the difficulty of the supplied hashes is higher than or equal to some_heuristic_nontrivial_safe_value.
3- Check that the first/every 10 hashes are reasonably close in terms of difficulty (they are supposed to be).
4- Blacklist SPV clients that send a series of malicious getheaders requests.
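Below is a hedged sketch of control 1, with a nod to control 4. It assumes a hypothetical helper, CheckLocatorSize, that would run before any index lookups; MAX_LOCATOR_SLACK, MaxLocatorSize and the misbehavior_score counter are illustrative names and values, not Core's.

```cpp
// Sketch of control 1: reject locators longer than ~10*log2(max_height)
// plus a small safety margin, and (control 4) penalize peers that keep
// sending oversized ones.
#include <algorithm>
#include <cmath>
#include <cstddef>

constexpr std::size_t MAX_LOCATOR_SLACK = 16;       // "safe, not too large" threshold

std::size_t MaxLocatorSize(int max_height)
{
    // The exponential-backoff construction yields O(log2(height)) entries,
    // so anything an order of magnitude above that is suspicious.
    return static_cast<std::size_t>(10.0 * std::log2(std::max(max_height, 2)))
           + MAX_LOCATOR_SLACK;
}

// Called before doing any index lookups for a getheaders request.
bool CheckLocatorSize(std::size_t locator_size, int max_height,
                      int& misbehavior_score)
{
    if (locator_size > MaxLocatorSize(max_height)) {
        misbehavior_score += 20;                     // control 4: build toward a ban
        return false;                                // drop the request cheaply
    }
    return true;
}
```

Since the locator's length is known as soon as the message is deserialized, this check is O(1) and can reject an oversized request before touching the block index at all.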
As for other network messages (like getblock): they are not good candidates for such an attack because of the very short period of time for which the lock is held.
There are a billion and one things that are conjecturally slow, but most of them are not actually slow (and more than a few things that seem fast are actually slow).
I don't agree. Any algorithm or piece of code can be correctly analysed and then optimized and secured accordingly. No magic.
... Anyone can speculate; testing is what is actually valuable.
Code review is valuable too, and it carries an inherent premium compared to tests and benchmarks.