Re: [1450 TH] BitMinter.com [1% PPLNS,Pays TxFees +MergedMining,Stratum,GBT,vardiff]
by AaronS on 23/06/2014, 07:26:09 UTC


yes there are tests, but all involve groups of about 100 TH to 200 TH of hash.

In BitMinter:

group 1 - total of 200 TH
koi ....

Philipma... I agree with you, mostly because I proposed group statistical evaluation and probation.

Do I care if a small miner benefits from bad hardware/software? In theory, yes, but how does one practically identify and enforce it?

Well, one idea is to go back to grouping the miners together and paying out proportionally to their contribution when results fall outside an acceptable statistical range. You could manage it on a rolling "find" basis. So, for example, if a 200 TH group should find 9 blocks (on average) over a one-month period, with a deviation of +/- 1 block, but your group finds 6, that would be statistically abnormal... so the group payout would be less. Likewise, if a 200 TH group finds 12 blocks, you could argue they should get more.

Over time, by the law of averages, it should even out. But if a group is consistently underperforming, then you know you have bad actors (intentionally or not).

It is a completely different way of operating a pool. So, Dr. H needs to buy into this idea and determine whether it is worth his time. 
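
For concreteness, here is a minimal sketch of the grouped-luck payout idea quoted above. Everything in it is an illustrative assumption (the function names, the ±1-block tolerance taken from the quote, and a rough mid-2014 network hashrate), not anything BitMinter actually runs:

```python
# Minimal sketch of the grouped-luck payout idea quoted above.
# All names, thresholds and network numbers are illustrative assumptions.

def expected_blocks(group_ths, network_ths, blocks_in_period):
    """Blocks a group should find, proportional to its share of network hashrate."""
    return blocks_in_period * group_ths / network_ths

def group_payout_factor(found, expected, tolerance=1.0):
    """Scale the group's payout once its luck leaves the accepted band.

    Inside [expected - tolerance, expected + tolerance] the group is paid
    normally; outside it, payout becomes proportional to blocks actually found.
    """
    if abs(found - expected) <= tolerance:
        return 1.0
    return found / expected

# A 200 TH/s group against an assumed ~100,000 TH/s network, over ~30 days:
exp = expected_blocks(200, 100_000, 30 * 144)    # ~8.6, i.e. roughly 9 blocks
print(group_payout_factor(found=6, expected=exp))    # ~0.69, paid less
print(group_payout_factor(found=12, expected=exp))   # ~1.39, paid more
```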

First off, finding 6 instead of 9 blocks would not be statistically abnormal for any reasonable definition of abnormal. Second, what is being suggested is making everyone mine in a pool that is only 200 TH/s. If people wanted to be in pools that small, they would join pools that small. GHash is so popular exactly because people do not want to be in overly small pools.
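
Block finds over a fixed period are well modelled as a Poisson process, so the first claim is easy to check with nothing but the standard library (the mean of 9 is taken straight from the quoted example):

```python
from math import exp, factorial

def poisson_cdf(k, mean):
    """P(X <= k) for a Poisson-distributed block count with the given mean."""
    return sum(exp(-mean) * mean**i / factorial(i) for i in range(k + 1))

# Expecting 9 blocks, a group finds 6 or fewer about 21% of the time,
# roughly one month in five, nowhere near statistically abnormal.
print(round(poisson_cdf(6, 9), 3))   # ~0.207
```

The standard deviation of a Poisson count with mean 9 is sqrt(9) = 3 blocks, so a "+/- 1 block" band would flag perfectly normal months constantly.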

Yes, over time things would average out, but you could say exactly the same thing about solo mining. In fact, over time you will make more solo mining than pool mining because you don't have to pay the pool fees. So why isn't everyone solo mining? Because we want lower variance than we get when solo mining. No one would join a pool with enforced 200 TH/s sub-pools.
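
The variance point can be put in numbers too. When block finds are Poisson, the relative spread (standard deviation over mean) of mining income is roughly 1/sqrt(expected blocks), whether those blocks are found solo or by the pool whose rewards you share. A sketch with made-up but plausible monthly figures (assumptions, not pool statistics):

```python
from math import sqrt

def income_cv(expected_blocks):
    """Relative spread (std / mean) of income when block finds are Poisson.

    Income is proportional to blocks found by whoever you mine with, so the
    coefficient of variation is 1 / sqrt(expected blocks) in either case.
    """
    return 1.0 / sqrt(expected_blocks)

# Illustrative monthly expectations (assumptions, not pool statistics):
print(income_cv(0.5))   # small solo miner, 0.5 blocks/month expected: ~141%
print(income_cv(9))     # a forced 200 TH/s sub-pool, ~9 blocks/month:  ~33%
print(income_cv(65))    # the whole ~1450 TH/s pool, ~65 blocks/month:  ~12%
```

That jump in relative swing is exactly what miners join a large pool to avoid.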

As to Philip's idea, if you group users together for statistical analysis, you pretty much eliminate any chance of finding a bad actor, since the poor results are covered up by everyone else in the group, just as a pool covers them up in general. Checking users individually clearly makes no sense except for the biggest few. Even if you could check all the users, with over 6000 workers in the pool we should expect more than 6 workers with a CDF of 99.9% or worse. So how would you define unlucky then? It is a distribution, and you cannot just cut the tail off. If you eliminate the most unlucky users from your pool, there will be new "most unlucky" users. As someone mentioned earlier, this strategy will result in a pool of just a single user - and even this super-lucky user will, given time (say, about 100 blocks found), also have had a CDF over 99% at some point.
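
The "6 workers out of 6000" figure is just the tail of the distribution doing what tails do, and evicting the tail solves nothing. A toy illustration, under the null hypothesis that every worker is honest (in which case each worker's observed luck CDF is uniformly distributed):

```python
import random

workers = 6000
threshold = 0.999                      # "CDF of 99.9% or worse"

# Expected number of perfectly honest workers beyond the threshold by chance:
print(workers * (1 - threshold))       # 6.0

# Evicting the unluckiest worker just promotes someone else into that spot;
# the extreme tail is a property of the distribution, not of any one miner.
luck = [random.random() for _ in range(workers)]
for _ in range(5):
    worst = max(luck)                  # largest CDF value = unluckiest worker
    print(f"evicting worker with luck CDF {worst:.4f}")
    luck.remove(worst)
```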