In theory, over extended tests, I should see no benefit to mining at diff 2 vs. diff 8. In actuality, though, I was seeing results that were poorer by somewhere in the neighborhood of 100-150 MH/s. If I am going to lose credit for my current work, I'd much rather lose 2 or 4 than 8 or 16... or 512 or 1024. Granted, those come up less frequently.
If I can come up with a better way of demonstrating it, I will. Maybe there is some other explanation for what I was observing. *shrugs*
My good Fury produced:
At work diff 2: effective 2.11 GH/s.
At work diff 4: effective 2.03 GH/s.
At work diff 8: effective 1.94 GH/s.
Why, then, is this happening? I ran it for longer than 2 hours at each setting. I suppose it could be bad luck, so I will try each again.
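For what it's worth, fewer shares come back at higher work diff, so a 2-hour reading gets noisier as the diff goes up. A minimal Python sketch of that, assuming shares arrive as a Poisson process and one diff-1 share is roughly 2^32 hashes (the 2 GH/s figure is just a placeholder, not a measurement):

```python
# Rough check of how much "luck" can move a 2-hour hashrate reading.
# Assumes shares arrive as a Poisson process and that one difficulty-1
# share corresponds to ~2^32 hashes.
import math

true_hashrate = 2.0e9        # hypothetical true rate, 2 GH/s
run_seconds   = 2 * 3600     # the 2-hour test window

for work_diff in (2, 4, 8):
    share_rate = true_hashrate / (work_diff * 2**32)   # shares per second
    expected_shares = share_rate * run_seconds
    # Poisson counting noise: relative standard deviation is 1/sqrt(N)
    rel_sd = 1.0 / math.sqrt(expected_shares)
    print(f"diff {work_diff}: ~{expected_shares:5.0f} shares, "
          f"reading good to about +/-{rel_sd * 100:.1f}%")
```

On that rough math, the spread between 2.11 and 1.94 GH/s is not far outside normal run-to-run variance at diff 8, so a single 2-hour run can't rule out the bad-luck explanation on its own.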
What you're seeing is one of the reasons I prefer to mine at lower difficulty if I can. It also appears to me, just from eyeballing the logs, that I'm disproportionately finding the lower-difficulty values.
However, the statement made in
If those random hardware errors cause the entire current work unit to fail, then it's best to keep their effect to as small a portion of work as possible.
Let's pretend that work diff of 64 takes about 10 mins, on average, to produce an accepted result. If you have a single hardware error in that 10 minutes, you wasted the entire 10 minutes.
made it seem to me like you were expecting that, with 50% hardware errors, your effective utility would be halved if you doubled your difficulty.
Sorry for my misinterpretation of your statement.
There is a chance you could waste that entire 10-minute block, but one error isn't going to guarantee it; that is what I was clarifying. In fact, the chance of that should become smaller, probabilistically, at higher difficulties.
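To make the two readings of that statement concrete, here is a toy Monte Carlo. Every number in it is made up, including the one-error-per-ten-minutes rate borrowed from the 10-minute example above. One mode assumes an error voids the whole current work unit; the other assumes it only costs the single hash it corrupted. Which of those is closer to what the hardware actually does is exactly the open question here.

```python
# Toy Monte Carlo of "how much does a hardware error cost?" under two
# different assumptions. All constants are made-up placeholders.
import math
import random

HASHRATE    = 2.0e9           # hypothetical 2 GH/s device
ERROR_RATE  = 1.0 / 600.0     # assume one hardware error every 10 minutes
SIM_SECONDS = 30 * 24 * 3600  # simulate a month to smooth out the noise

def effective_rate(work_diff, whole_unit_lost):
    """Effective hashes/second credited at the given work difficulty."""
    share_rate = HASHRATE / (work_diff * 2**32)   # expected shares per second
    t, credited_diff = 0.0, 0.0
    while t < SIM_SECONDS:
        wait = random.expovariate(share_rate)     # time until the next share
        t += wait
        if whole_unit_lost:
            # "whole unit" reading: any error in this interval voids the share
            if random.random() < 1.0 - math.exp(-ERROR_RATE * wait):
                continue
        # "single hash" reading: an error almost never lands on the winning
        # nonce, so the share is credited regardless
        credited_diff += work_diff
    return credited_diff * 2**32 / t

for diff in (2, 8, 64):
    whole  = effective_rate(diff, whole_unit_lost=True)
    single = effective_rate(diff, whole_unit_lost=False)
    print(f"diff {diff}: whole-unit model {whole/1e9:.2f} GH/s, "
          f"single-hash model {single/1e9:.2f} GH/s")
```

Under the whole-unit assumption, the low-difficulty advantage shows up clearly; under the single-hash assumption, the work difficulty makes essentially no difference. Neither mode is being claimed as what the silicon actually does; the sketch just shows what each interpretation would predict.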