To give you some context, it's like a nefarious entity with a zero-day exploit.
It can always happen. For example: any miner, at any time, could get extremely lucky and produce the next block hash with 128 leading zero bits, or even 256. The first would prove that SHA-256 is broken when it comes to collisions, the second when it comes to preimages. Every time any user makes a new transaction, that person could get extremely lucky, stumble upon some serious weakness in SHA-256, and break everything. Every time a random seed is created, there is a non-zero chance that it produces a private key equal to one.
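For intuition, here is a minimal Python sketch (with a made-up header as input) that counts the leading zero bits of a double-SHA256 digest, the hash used for Bitcoin block headers. Note that explorers display block hashes byte-reversed, so the digest is reversed before counting:

```python
import hashlib

def block_hash(header: bytes) -> bytes:
    """Double SHA-256, as used for Bitcoin block headers."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

# Explorers show block hashes byte-reversed, so reverse the raw digest
# before counting the "leading" zeroes seen on screen.
digest = block_hash(b"example header")[::-1]
print(leading_zero_bits(digest))
# Hitting 128 zero bits by pure luck is a ~2**-128 event per attempt,
# the generic birthday bound for SHA-256 collisions; 256 zero bits
# would match the 2**-256 preimage bound.
```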
However, all of those things are unlikely: mathematically possible, but unlikely. The system is not explicitly protected from events like these, because they can always happen, no matter whether classical, quantum, or any other technology is in use. We can only protect against attacks that we know how to detect, and how to fix, in the first place.
Why make it known that it's in the wild by using it for a small "reward" if the attacker could wait and use it for a larger "reward"?
Because the whole system assumes that the majority of computing power is in honest hands. This means attacks will be honestly revealed faster than attackers can start exploiting them. If that's not the case, then it is equivalent to the situation where someone suddenly has 99% of the computing power and rewrites the full chain. Every system has limitations, and Bitcoin is no exception.
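To make that assumption concrete, here is a minimal sketch of the gambler's ruin result from section 11 of the whitepaper: an attacker controlling a fraction q of the hash power, starting z blocks behind, catches up with probability (q/p)^z, and with certainty once q >= p:

```python
def catch_up_probability(q: float, z: int) -> float:
    """Probability an attacker with hash power share q ever catches up
    from z blocks behind (Bitcoin whitepaper, section 11)."""
    p = 1.0 - q  # honest share of the hash power
    if q >= p:
        return 1.0  # a dishonest majority always wins eventually
    return (q / p) ** z

for q in (0.10, 0.30, 0.45):
    print(q, [round(catch_up_probability(q, z), 6) for z in (1, 6, 100)])
```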
In your opinion, how hard/challenging would it be for developers in general, not just Bitcoin, to patch/upgrade their systems to become quantum-resistant?
It depends on the particular attack. Some attacks can be patched quite easily; others are impossible to patch. For example, if you have some old embedded device with 64 kB of space available for your software, then how do you switch to post-quantum cryptography, where signatures or public keys take up significant space? Or: how do you send anything quickly, if some old protocol can handle only 450 bytes per second? A rough comparison is sketched below.
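The sketch compares signature and key sizes against both constraints; the numbers are approximate figures for common parameter sets (ML-DSA-44 and SPHINCS+-128s), quoted from memory and meant only as orders of magnitude:

```python
# Approximate sizes in bytes; illustrative, not normative.
SCHEMES = {
    "ECDSA (secp256k1)":      {"sig": 72,   "pubkey": 33},
    "ML-DSA-44 (Dilithium2)": {"sig": 2420, "pubkey": 1312},
    "SPHINCS+-128s":          {"sig": 7856, "pubkey": 32},
}

FLASH_BUDGET = 64 * 1024  # bytes on the hypothetical embedded device
LINK_RATE = 450           # bytes per second on the old protocol

for name, size in SCHEMES.items():
    total = size["sig"] + size["pubkey"]
    print(f"{name}: {total} B per (sig, pubkey), "
          f"{FLASH_BUDGET // total} fit in 64 kB, "
          f"{total / LINK_RATE:.1f} s at 450 B/s")
```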
On the other hand, the Value Overflow Incident was patched quite easily, in a soft-fork way, and if you reintroduced that vulnerability on top of the latest version, you would still land on the same chain as today. This means that a particular fix depends on the particular attack, and there is no "one size fits all" answer to this question.
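For a concrete picture of that incident (CVE-2010-5139), here is a sketch simulating the signed 64-bit arithmetic that caused it: two huge outputs wrap around to a negative sum, which a naive "outputs do not exceed inputs" check accepts, while validating each value and the running sum against MAX_MONEY (roughly what the 0.3.10 fix introduced) rejects the transaction outright:

```python
MAX_MONEY = 21_000_000 * 100_000_000  # 21 million BTC, in satoshis

def int64(x: int) -> int:
    """Wrap to a signed 64-bit integer, like C++ int64_t overflow."""
    x &= (1 << 64) - 1
    return x - (1 << 64) if x >= (1 << 63) else x

# The two outputs from block 74638: 92233720368.54277039 BTC each.
outputs = [9_223_372_036_854_277_039, 9_223_372_036_854_277_039]

total = 0
for value in outputs:
    total = int64(total + value)
print(total)  # -997538: negative, so a naive "sum <= inputs" check passes

def valid(outputs) -> bool:
    """Check each value and the running sum against [0, MAX_MONEY]."""
    total = 0
    for value in outputs:
        if not 0 <= value <= MAX_MONEY:
            return False
        total += value
        if total > MAX_MONEY:
            return False
    return True

print(valid(outputs))  # False
```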
Plus, how ready are we if it indeed does become a serious issue?
Every attack that becomes publicly known doesn't just break things. It can also let us understand the world better, which means new things can be built on top of existing software. For example: if it were possible to recover the private key for every public key, then OP_CHECKSIG would become just some 256-bit calculator. Features like OP_CHECKSIGFROMSTACK or OP_CHECKTEMPLATEVERIFY could then easily be re-wired into something that uses OP_CHECKSIG, plus the attack, to reach the same feature.
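Here is a toy sketch of that re-wiring idea. This is deliberately not secp256k1: it is a tiny Schnorr-like scheme whose discrete logs can be brute-forced, standing in for a world where the private key behind any public key is recoverable. Once recover_privkey works, a CHECKSIG-style verification against anyone's key can be satisfied for any message you choose, which is the kind of primitive that CHECKSIGFROMSTACK-style constructions need:

```python
import hashlib
import secrets

P_MOD = 7919  # small prime; the multiplicative group mod P_MOD is our toy group
G = 7         # group element standing in for the generator point

def group_order(g: int, p: int) -> int:
    """Order of g in the multiplicative group mod p (brute force)."""
    x, n = g, 1
    while x != 1:
        x, n = x * g % p, n + 1
    return n

Q = group_order(G, P_MOD)

def h(*parts: int) -> int:
    """Hash integers to a challenge in [0, Q)."""
    data = b"".join(x.to_bytes(8, "big") for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def sign(msg: int, d: int):
    k = secrets.randbelow(Q - 1) + 1
    R = pow(G, k, P_MOD)
    return R, (k + h(R, msg) * d) % Q

def verify(msg: int, pub: int, sig) -> bool:
    R, s = sig
    return pow(G, s, P_MOD) == R * pow(pub, h(R, msg), P_MOD) % P_MOD

def recover_privkey(pub: int) -> int:
    """The 'attack': exhaustively solve the discrete log."""
    for d in range(1, Q + 1):
        if pow(G, d, P_MOD) == pub:
            return d
    raise ValueError("no key found")

# Someone else's key; we never see their private part...
their_pub = pow(G, secrets.randbelow(Q - 1) + 1, P_MOD)
# ...until we run the attack. Now "CHECKSIG against their_pub" is a
# constraint anyone can satisfy for an arbitrary message.
d = recover_privkey(their_pub)
print(verify(42, their_pub, sign(42, d)))  # True
```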
When the SHA-1 collision was revealed, a hardened SHA-1 version was deployed almost immediately. And the whole protection is based strictly on known attacks, and nothing else: if someone attacked SHA-1 in a different way than the one publicly revealed, then that person could reach SHA-1 collisions even in the hardened SHA-1 version. However, in that case, it would tell us more about its internal construction, so a second hardened version could be made.
Also, the Value Overflow Incident was serious. Many blocks were produced, but everything was fixed before the bad transaction reached 100 confirmations. In the case of breaking SHA-256, it could be possible, for example, to instantly halt the chain by producing a chainwork of ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff.
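A minimal sketch of why that chainwork value halts everything: Bitcoin Core credits each block with roughly 2^256 / (target + 1) work (GetBlockProof), so a forger able to meet an absurdly low target would out-weigh the entire honest chain with one block, and nobody could ever build a heavier chain past a cumulative work of 2^256 - 1:

```python
def block_work(target: int) -> int:
    """Work credited per block: roughly 2**256 / (target + 1)."""
    return 2**256 // (target + 1)

# Difficulty-1 target, as in the genesis block.
genesis_target = 0x00000000ffff0000000000000000000000000000000000000000000000000000

print(block_work(genesis_target))  # ~2**32: minimum-difficulty block
print(block_work(1))               # 2**255 from one absurdly lucky block
print(hex(2**256 - 1))             # the ffff...ffff ceiling quoted above
```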