Normal Radeons at stock clocks and voltages, that are not overheating, have a known error rate (which is something like 1 error per several hundred million instructions)
One error per several hundred million instructions... where are you getting that information from? (an honest question, I'd like to learn more)
A 58xx series, from the information I can find, can do between one and four instructions per clock. If it is clocked at 775,000,000 cycles per second (775 MHz), that's up to 3.1 billion instructions per second. At 1 error per several hundred million instructions, that works out to roughly ten errors every second.
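Just to make that arithmetic explicit, here's a quick back-of-the-envelope check in Python. The clock speed, the instructions-per-clock figure, and the "1 per several hundred million" rate are the assumptions from this thread, not measured values:

# Back-of-the-envelope check of the claimed error rate.
# All inputs are the figures assumed in this thread, not measurements.
CLOCK_HZ = 775e6              # HD 58xx core clock: 775 MHz
INSTR_PER_CLOCK = 4           # upper bound from what I can find
ERRORS_PER_INSTR = 1 / 300e6  # "1 error per several hundred million instructions"

instr_per_second = CLOCK_HZ * INSTR_PER_CLOCK            # ~3.1e9 instructions/s
errors_per_second = instr_per_second * ERRORS_PER_INSTR  # ~10 errors/s

print(f"{instr_per_second:.2e} instructions/s -> ~{errors_per_second:.0f} errors/s")

That prints about 3.10e+09 instructions/s and roughly 10 errors/s, which is why the claimed rate seems implausible to me.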
When you talk about hardware errors, are you sure you're not talking about rounding errors?
A silicon chip is not supposed to produce errors at all; errors will only occur if the voltage is incorrect, the temperature is out of range, the chip itself is faulty, or something extremely rare happens, such as a cosmic ray striking it and [in the case of RAM] 'flipping a bit', perhaps once every few weeks.
Hardware errors cause things like bluescreens, frozen machines, display driver crashes and the like. You would have to get REALLY lucky for a hardware error to JUST SO HAPPEN to only affect something that isn't important, like what color this or that pixel is, or something mundane like that. More likely, a hardware error won't land on a piece of data where it doesn't matter; it will probably crash the system.
What I can find on cosmic ray bit flips:
http://www.zdnet.com/blog/storage/dram-error-rates-nightmare-on-dimm-street/638
http://lambda-diode.com/opinion/ecc-memory
http://www.cs.toronto.edu/~bianca/papers/sigmetrics09.pdf