Post
Topic
Board Pools
Re: Multipool - the pool mining pool (with source code)
by
Multipool
on 30/06/2011, 07:10:12 UTC
The rpc getwork message does appear, but only when the worker disconnects.
This would be the exact result if nonblocking mode on the socket doesn't work. Blocking is turned off at line 1717:
Code:
$incoming->blocking(0);
Then check out line 1583:
Code:
while (<$c>){ #nonblocking
In nonblocking mode, this statement slurps a whole line, or as much of a line as the server has received from the client so far. In blocking mode, it waits for more data from the client until a whole line is available or the socket is closed. The magic trick is that json rpc requests are not terminated by line breaks! So the loop hangs here waiting for an end-of-line that never comes, until the client closes the connection. At that point, since no more data will be available, the <$c> statement returns, the while loop iterates one final time, and the server tries to send a response through the closed socket.

To make this work without nonblocking mode, you would need to use the perl read function on the socket - either read content-length characters after receiving a single empty line (the separator between the header and the content in http packets), or read a single character at a time until you get a valid full json object string. The dangers here are that some pools use chunked encoding instead of content-length, and that you must trust your miners to send valid requests. A single malicious request (or even an accidental one caused by network problems) with a content-length header but only partial content will hang the entire rpc server!
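In rough sketch form, the content-length approach would look something like this - not code from Multipool.pl, just an illustration of the blocking-mode workaround, with all the caveats above still applying:
Code:
# Sketch: read one request from $c in blocking mode. Assumes the client always
# sends a Content-Length header (no chunked encoding) and sends the full body -
# a partial body will hang this read, as noted above.
my ($content_length, $line) = (0, "");
while (defined($line = <$c>)) {              # headers are newline-terminated, so <$c> is safe here
    $content_length = $1 if $line =~ /^Content-Length:\s*(\d+)/i;
    last if $line =~ /^\r?\n$/;              # empty line = end of headers
}
my $body = "";
read($c, $body, $content_length) if $content_length;   # json rpc content has no trailing newline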
Post
Topic
Board Pools
Re: Multipool - the pool mining pool (with source code)
by
Multipool
on 30/06/2011, 05:51:46 UTC
Hm... I'm still left with some questions. One thing is - is it normal that I see *many* rpc calls per second? I mean, calls like this:

Code:
got work 6c1ea670 b5d42a84 from deepbit (t=0.070/0.082 s/g=0/14 old=32/2209 p=0.00)
got work 6c1ea670 259bd031 from mtred (t=0.118/1.508 s/g=0/12 old=3/44 p=0.00)
got work 6c1ea670 dde84185 from btcmine (t=0.154/0.157 s/g=0/8 old=30/3192 p=0.00)
rpc connection opened from 83.26.173.182:52750
rpc authorization from 83.26.173.182:52747: user=--- pass=x
rpc getwork from 83.26.173.182:52747 (deepbit 6c1ea670 dddc356d) queue: 15/103w 0so 0sh
got work 6c1ea670 3128bc4f from deepbit (t=0.082/0.082 s/g=0/15 old=32/2209 p=0.00)
rpc connection opened from 83.26.173.182:33185
rpc authorization from 83.26.173.182:52750: user=--- pass=x
rpc getwork from 83.26.173.182:52750 (btcmine 6c1ea670 c25c0819) queue: 15/103w 0so 0sh
got work 6c1ea670 6eda19aa from mtred (t=0.106/1.368 s/g=0/13 old=3/44 p=0.00)

I only use two workers, so I assumed it would not generate much traffic, and yet it does Smiley.

My other question is - will you still be working on this project? I already saw an error in the log while parsing rewards, probably for BTCMine. I've looked at the rewards functions, but they are somewhat complicated and I doubt I'll manage to fix that myself without breaking something Smiley.

A work queue size of 103? Do you see like ten requests per second? That's not supposed to happen - usually it's one request every five seconds per miner. I've seen this before though - what miner version are you using? Does rebooting the server help?
And yes, I will work on the project.

One more thing: I noticed that there is no reward function for bitcoins-lc, even though it is supported in the pools.conf. Does that mean that the script won't be able to distribute payments from bitcoins-lc to my workers?
There isn't a function for bitcoins.lc yet - they don't show per-round earnings and it will require some magic to sort the rounds out.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool (with source code)
by
Multipool
on 30/06/2011, 04:46:52 UTC
I had thrown the 'production' switch, which caused the work queue entries to start out at 30, but was only testing with about 300MHash/sec.  Occasionally (presumably after a new round began on the current pool or something), I would get dozens of invalid shares in a row from that pool.  Decreasing the work queue size seemed to mitigate the issue somewhat.

I occasionally saw work queue 'purge' messages going by in the log, but never noticed a non-zero number of entries being purged during these rejection streaks.

If you feel like explaining it, how is the work queue purging determined?  I know that Multipool-the-server implements Long Polling.  What about Multipool-the-client? 

For anyone running Multipool locally/with a fairly small hash rate, I might suggest decreasing the 'production' work queue size which defaults to 30 queue entries (around line 43) to avoid similar issues.  OTOH, maybe this is a non-issue and I was just having problems with Mt. Red.
Anyone running Multipool locally will be satisfied leaving the $production switch set to false. Setting it to true does precisely such things as increasing the work queue size and rewriting all the addresses to be external rather than 127.0.0.1.

The work queue size expands or shrinks slowly to meet demand - it stores about 15 seconds' worth of shares. When work is received from a pool with a different prev_block hash than before (substr($work->{data}, 16, 8)), all the work in the queue with the previous hash (including work from all other pools) is purged. This is also about when longpoll is sent. The exact timing is a bit complicated: while the pools typically, but not always, all work on the same block, the switch from one block to the next can happen up to 20 seconds apart between pools. So while you might have received a new share (and purged all old shares) from one pool, you might still receive shares with the old block hash from another pool for up to 20 seconds. Most of these shares will be rejected as stale once they are actually solved, so there is still some room for improvement in this area. Also, you don't want to send a longpoll signal ten times in 20 seconds, so you need a way to prevent double longpolls. For now, Multipool only sends longpoll when its own bitcoind reports a block hash change, but this could also be based on a cooldown timer.
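In pseudocode, the purge step amounts to roughly this (variable names are approximations, not the exact code - the real queue is shared between threads):
Code:
# When new work arrives, compare its prev_block fragment to the last one seen.
my $prev_block = substr($work->{data}, 16, 8);
if ($prev_block ne $last_prev_block) {
    # Block change: drop every queued getwork built on the old prev_block,
    # regardless of which pool it came from.
    @work_queue = grep { substr($_->{data}, 16, 8) eq $prev_block } @work_queue;
    $last_prev_block = $prev_block;
    $longpoll_send++;                        # and schedule a longpoll push to the miners
}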

I'm trying to set up Multipool on a Win7 machine (yes, I'm a masochist), and things seem to be more or less working.  I can see Multipool connecting to the regular pools and getting shares.  I can see my miner connect to it, but for some reason the miner never receives any shares from Multipool.  The miner sits connected but idle, with no errors on either side.  I've played around with ports and I'm sure it's connecting properly (firewall deactivated and such), but I just can't get the miner to get any shares.

I'm a coding 'hobbyist', but I've learned Perl in the last 48 hours for this project.  I understand about half of what the code does, but I can't find the section that assigns shares to the workers.  It could be a porting issue (I had to change several lines to make it work with wget for Windows), could be a config error (IP addresses or such), could be who the heck knows what??

I don't know what other kind of information anyone would need to help me, but I'm grasping at straws at this point.
Amazing that it actually runs! I had imagined the biggest problem with the porting would have been the sockets. Perl on Windows possibly doesn't implement all socket features, like maybe nonblocking mode, or the can_read function of IO::Select. Look in the rpc_server_alt function that handles all client connections. There is a big while loop which iterates over all connected clients, reads any request data available up to that point, and then sends response data when sufficient request data (such as basic authentication) has been obtained. Trace the execution of this loop to see what runs normally and at what exact point the execution gets stuck.
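For orientation, the shape of such a loop is roughly this (a made-up skeleton, not the actual rpc_server_alt code; the port and buffer handling here are placeholders) - it also shows the two features most likely to misbehave on Windows Perl:
Code:
use IO::Socket::INET;
use IO::Select;

my $listener = IO::Socket::INET->new(LocalPort => 8330, Listen => 10, Reuse => 1) or die $!;
my $select   = IO::Select->new($listener);
my %buffer;                                    # partial request data, per client

while (1) {
    for my $sock ($select->can_read(0.1)) {    # can_read: one of the suspects on Windows
        if ($sock == $listener) {
            my $client = $listener->accept or next;
            $client->blocking(0);              # nonblocking mode: the other suspect
            $select->add($client);
        } else {
            my $n = sysread($sock, my $chunk, 4096);
            if (!$n) {                         # closed or error: drop the client
                $select->remove($sock);
                delete $buffer{$sock};
                close $sock;
                next;
            }
            $buffer{$sock} .= $chunk;
            # ...once headers and basic auth are complete, send the getwork response...
        }
    }
}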
Post
Topic
Board Pools
Re: Multipool - the pool mining pool (with source code)
by
Multipool
on 29/06/2011, 21:37:17 UTC
I managed to get it up and running, ...
Edit 2: Resolved that one as well. It required a change in the code: on line 1303 you have to replace multipool.hpc.tw with a hostname you use. It works great so far, thanks!
Great to hear the pool works for other people as well!

I experience one problem. After running the Multipool server for a few hours, it slows down. The output in the console slows down and then the workers start experiencing idle times. Restarting the server helps. Is this a memory leak, or does the log file maybe slow the whole thing down? Has anyone else noticed it?
The pool worked fine for several days at a time with ~30 users connected. What is the cpu/memory usage for Multipool.pl and bitcoind?

There is one correction I should make. On line 520 replace:
Code:
my $pool_name=$ranked_pools[$i]->{name};
with
Code:
my $pool_name=$pool->{name};
This fixes a threading race issue I was working on, which crashes the pool about once a day.

I think we'll need to move the pool specific functions ([poolname]_rewards) to the pools.conf file, to be a bit more portable/extendable... Maybe something like a "plugin" system.

Unfortunately I didn't yet code a lot in Perl and porting all this stuff to Python is a task for later (at least for me). Would be great to at least have a proper git repo etc. running!
Instead of solo, something like Eligius with PPS + nearly 0 fees would be interesting (if people don't exploit it by withholding winning shares, that is...), or maybe even namecoin.
I'll configure the repo properly at some point. The computer I used didn't have git, and I was surprised to find out that github doesn't have any web upload interface that I could see, besides the "downloads" button. The poolname_rewards functions can indeed probably be moved into separate files for easier sharing, along with whatever is in pools.conf presently. Just remember that poolname_rewards is not critical for single miners, since everything necessary for pool hopping is already contained in pools.conf.
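As a sketch of what such a plugin system could look like (file layout and names entirely hypothetical - this is not in the current code):
Code:
# Load one file per pool from a plugins/ directory; each file defines a
# subroutine named <poolname>_rewards in the main package.
for my $file (glob "plugins/*_rewards.pl") {
    do $file or warn "failed to load $file: " . ($@ || $!);
}

# Later, once a pool's stats page has been fetched:
my $fn = "main::" . $pool->{name} . "_rewards";
no strict 'refs';
if (defined &$fn) {
    &$fn($pool, $page);                  # pool-specific parser supplied by the plugin
} else {
    warn "no rewards parser for $pool->{name}\n";
}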

Quote from: README
To pay users, create a "main" account, move funds to it, and run:
./Multipool.pl save/`ls -t save|head -n1` getpayouts | tee pay
bitcoind sendmany main `cat pay` payout
From what I understand, I have to manually move the funds from all accounts associated with the pools to the "main" account, and then execute these commands? Is there any automatic way to do this? And do you use a cron script for running this periodically?

Also, please post your Bitcoin address in the first post so we can donate to you! Smiley
I was too hesitant to set the payouts on automatic, since that is the one single step that is irreversible in case of any miscalculations. As it is, there are only two commands to run, with a pause in between to allow you to glance over the numbers to see that they are sensible. And moving btc gold from account to account... builds strength. You can definitely automate all these steps in a cron script if you are more fearless Wink. Also, I don't beg, and neither did I do this for btc.

Unless you have a few GH/s to throw at it, I'd disable the solo pool, as it will not give you profits for weeks and just "waste" GPU cycles Smiley.
You misunderstand: the solo pool isn't there to "waste" your GPU cycles on solo mining, it is there to ensure that you do not waste your GPU cycles. It is probably unlikely that you will run into this problem as a single miner, but there have been times for Multipool when enough pools went down/had insufficient responsiveness that Multipool was unable to pull enough getworks to satisfy all the connected miners. If not for solo shares, those miners would have been idling. Solo shares also mitigate the harm that uncooperative miners can do (see "user_lawful" function). In normal operations, the only time you, as a single miner, will ever receive solo shares is at the very beginning, before multipool has had time to connect to the other pools.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool (with source code)
by
Multipool
on 29/06/2011, 05:44:05 UTC
Code:
404 Not Found
edit: I think this 404 was because I missed one edit, where you have something about tor exit.
Oh right, you need an index.html file for the homepage. Uploaded a simple one to github. Comment out the stuff about tor. All the /user/ pages are dynamically generated though.

I'm currently getting the following error in my log when trying to start things up for the first time:
Code:
connection error to solo (p=0.50): 500 Internal Server Error
You need a line for the "solo" pool in your accounts.conf file, listing your bitcoind username and password (same as in bitcoind.conf, but a second time). You could probably use another pool's login info, as long as you change the "solo" address in pools.conf. The solo pool is used to serve getworks when all other pools are slow/unavailable. These requests are blocking though, so make sure your default pool doesn't go down and has good responsiveness.

If you do decide to remove the solo pool entirely, comment out these lines:
Code:
569: # $do_send{"solo"}=$pools{solo} if $solo_queue->pending<$WORK_QUEUE_SIZE/4;
438: # $longpoll_send++ if $pool->{name} eq "solo";
uncomment 439: $longpoll_send++ if (!$switched_pools and $ranked_pools[0] and $pool->{name} eq $ranked_pools[0]->{name});
replace 1531-2:
    if ($lawful){
    $pair = $work_queue->dequeue_nb;
with:
    if ($lawful or 1){
    $pair = $work_queue->dequeue;
replace 1663-4:
    if ($lawful){                                                                                                         
    $pair = $work_queue->dequeue_nb;
with:
    if ($lawful or 1){                                                                                                         
    $pair = $work_queue->dequeue;

Edit:  And you received notice at least 24 hours prior to the pool shutting down on Friday the 24th of June during the evening in whatever time zone you are in.  You posted twice on the 25th and once on the 26th, neglecting both days to inform people of the potential issue?
I don't check every single email account every single day, particularly not on weekends or days that I'm not specifically working on Multipool. Also, Multipool is as much hosted in Taiwan as bitcoins.lc is hosted in the Caribbean.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool (with source code)
by
Multipool
on 28/06/2011, 20:30:05 UTC
There is a "Download" link where you can download Multipool.pl and README, but the pools.conf is corrupted/empty Sad.
Fixed the pools.conf link. First time using github, as you can see.

WTF! Please tell us which host this is. Everyone should stop using it. Shutting down and deleting your instance without notice, that's just unacceptable.
They did send notice. On Friday evening. And the server went down sometime Saturday/Sunday. Since I don't have proof of malicious intent, I am not going to libel them. Just something to be on the lookout for when VPS shopping.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 28/06/2011, 06:45:27 UTC
Amazing, I go away for a weekend, and everything is broken Cheesy! Good job guys!

My apologies to the miners, but if there is to be any blame, direct it where it is due: the VPS host has shut down and deleted my instance, after conveniently misplacing the BTC payment for the next month that I sent them last week. They've apologized and offered to put the server back up with an extra month free, but with none of the old data, of course. Coincidentally, they've revamped their plan offerings this same week, so for the price I've paid I only get 75% of the CPU and 50% of the memory I used to have. My policy in life is to give people the benefit of the doubt, as I will in this case, but it does bear mentioning that I have read (with skepticism) many stories of people being "accidentally" wiped by their host whenever their legacy plan becomes unprofitable. Who knows!

I might restart Multipool later, but I would need to find the time to whip the blank VPS into proper shape. In the meantime, as I have allowed Multipool to get out of my hands, I feel it would be fair to release the source code now, so that miners who liked Multipool can run an instance of their own. Download the files from github - written in Perl for Linux, but the right person could probably modify the code to run on Windows as well. To start the pool, you only need to create and edit the accounts.conf and bitcoind.conf files - see the readme file for instructions. The pool has all multi-user features enabled, but you can ignore those if all you care about is pool failover, balancing, and hopping. See the "utility_1" function for the specific details of the utility of mining in proportional pools.

As for the missing pool data, fortunately I have... daily... backups of the database. Unfortunately, those who have mined in the last day probably aren't in it. What would be the fair way to fill in the blanks - would miners assent if I distribute the last day's earnings in proportion to each miner's previous day's work? Tough luck though for the guy with the thousand pending shares and one confirmed Roll Eyes. There is also some extra money from eligius from the time the pool was sending shares to eligius-us but was being redirected to eligius-eu. Once I have the time, I'll try to see whom that should be paid out to.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 26/06/2011, 04:34:13 UTC
How hard is it to add new pools to this setup? There are multitudes of tiny proportional pools shooting up like mushrooms, waiting to be picked!

If you could abstract the info for pools in a config file, we might even be able to fill these out for you, so you just have to enter that information...

Only problem might be that these pools might then feel like they are being DDoSed, if we have a certain hashpower already within multipool (which I think we do).
The pool code is largely modular - I do have a pools.conf file Wink - and in addition there is some pool-specific code sprinkled around, mostly for specific load balancing. It is indeed easy to write a regex. It is more difficult to write a routine that can handle minor formatting changes in the webpages, unexpected data (each pool handles invalid blocks slightly differently, and they are rare enough to make it difficult to see an example), nonsensical data (the json api of some pools occasionally returns values that I can pretty surely say are incorrect), and potentially malicious data. Moreover, the routine must fail gracefully when it does fail to parse. Case in point:
Btw the multipool's website is down for me, while the mining seems to be working fine.
The cause of this particular problem was that bitcoind can apparently, on random occasions, time out on rpc requests, even though it's running locally (what is it so busy doing - re-sorting its database, or maybe thinking up ways to use even more memory?). A getdifficulty request during the generation of a webpage doesn't return a number, and the division of the user's round shares by the difficulty to get the efficiency value results in a division by zero and crashes the website thread. So now not only do I have to mistrust and recheck the data the pools send out, I cannot even trust my own bitcoind and have to sanity-check it too.
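The fix boils down to a sanity check along these lines (a sketch, not the actual code; rpc_request and $cached_difficulty are placeholders):
Code:
# Don't trust even the local bitcoind: validate the rpc result before dividing.
my $difficulty = rpc_request("getdifficulty");             # placeholder call
unless (defined $difficulty and $difficulty =~ /^\d+(\.\d+)?$/ and $difficulty > 0) {
    warn "bad getdifficulty result, using cached value\n";
    $difficulty = $cached_difficulty;                       # last known good value
}
my $efficiency = $round_shares / $difficulty;               # no more division by zero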

I am currently testing more pools and will put them up once I'm satisfied enough. If I were doing this alone, I wouldn't need so many checks - as long as the miners are running, nothing bad can happen. But with actual users, someone is certain to complain the moment a round is marked with "zero" earnings, or if the database were to become corrupted and had to be rolled back, or things like that, so I must exercise due diligence.

I would like some advice on what to do with all the small proportional pools, which do spawn like mushrooms. The problem is that they and Multipool are of comparable size. Therefore, while small pools can be extremely profitable, there are also high risks of (in this order):
  • overloading and crashing the pool
  • being banned for DoSing/hopping
  • the pool collapsing and shutting down, without rewarding the shares
  • the pool being a scam
My opinion is not to mine in pools below 100 GHash/s, which does limit the options somewhat.

Quote from: Multipool
On the other hand, I occasionally see shares that are clearly within a block, have valid merkles and nonces, have a valid hash, and are not duplicates, but that nevertheless get rejected by the target pool, even as many other shares of similar age get accepted around the same time. Why does this happen? Couldn't really tell without knowing the internal workings of the other pools.
Related?
http://forum.bitcoin.org/index.php?topic=18567.msg277371#msg277371
http://forum.bitcoin.org/index.php?topic=14483.0
Yes, if there were indeed a bug with X-Roll-NTime which caused duplicate work submissions, the results would be exactly the same as what I'm seeing. I don't have the stats off-hand, but I remember this happening with at least several pools, not just one. If that's the case, then it's up to the pool operators to fix it - Multipool cannot duplicate-check shares that it has never received in the first place.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 25/06/2011, 07:48:26 UTC
Ok, the surge has passed. By the way, if your recent share submission rate is less than 40%, you will only get solo shares Tongue. Don't want to get the pool banned for DoSing getworks with no shares returned - this protection worked very well just now.

I can't seem to connect any more Sad

*edit* back up. Offline for about 2 hours for me there Sad
Server crashed after running out of memory. Spent some time optimizing the database - replaced a large number of hashtables with arrays - cut the memory usage in half. Also, bitcoind needs restarting occasionally - its memory footprint can grow larger with time than that of Firefox 3!

Hello Multipool,

my total efficiency value suddenly jumped way up after today's difficulty change. Is your website script maybe not taking into account that older shares have to have their efficiency measured against the difficulty that was in place when they were submitted rather than the current value?

Regards TeaRex
That's right. I thought I put the variable difficulty checks in place already, but apparently I didn't. The checks are up now, and all your efficiencies are accordingly back down. Are you sure you wouldn't rather be looking at 170% efficiency values, even if only imaginary Cheesy?
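Conceptually, the check just values each share at the difficulty that was in effect when it was submitted (a sketch with made-up variable names):
Code:
# Expected solo earnings for a set of shares, using the difficulty at submit time
# rather than today's difficulty.
my $expected_btc = 0;
for my $share (@shares) {
    $expected_btc += 50 / $share->{difficulty_at_submit};   # a share is worth 50/difficulty BTC on average
}
my $efficiency = $earned_btc / $expected_btc;                # >1 means beating solo mining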

What I can see now is that my stale rate is much higher than when I mined on a single pool. It used to be about 0.5%; now, with 52 stales out of 3700 shares, I'm at about 1.4%.
Is this normal?
Yes, unfortunately that's the downside of a mining proxy. With the extra hop in both directions, getworks and shares have greater age, and therefore have higher chance of arriving after a block change and being rejected. Multipool has longpolling to minimize stale shares, but even so it takes time for the pool to be notified of a block update and then notify the miners. Of course, all this takes place within an interval of a few seconds.

On the other hand, I occasionally see shares that are clearly within a block, have valid merkles and nonces, have a valid hash, and are not duplicates, but that nevertheless get rejected by the target pool, even as many other shares of similar age get accepted around the same time. Why does this happen? Couldn't really tell without knowing the internal workings of the other pools.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 25/06/2011, 07:10:53 UTC
Something odd is going on. While the majority of the miners are doing fine, 8 of them are hammering the server at 10 getworks per second each, with few or zero submitted shares. They seem to have been doing fine in the past. Is there a new miner version out, or did I mess something up in the miner connection module? I haven't been changing anything in that area.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 24/06/2011, 06:46:55 UTC
Why do I see no earnings for btcguild, and for the other pools only after a very long time? Are you taking the funds from the pools manually?
Many pools have a waiting period before rounds are confirmed and rewards are calculated. The period is typically 120 blocks, which is ~20 hours. While btcguild and some other pools (but not all) display stats for rounds pending confirmation, those stats sometimes change midway through (rounds become invalidated), and that was messing with my database. So for now I display full "earned" stats only for confirmed rounds.

What changed on Wednesday in your utility calculations? mtred did really fine till Wednesday, then we started to send just 1 or 2 shares per block.
And what happened to all those eligius shares? That really hurt.
About profit: even if my stats look fine (efficiency 1.1), I haven't earned what I should have earned with standard value * 1.1. Not even 90 percent of it. Maybe all those idle and RPC problems are one reason for that.
If you are comparing payouts to what you estimate what you would have made during the same time while solo/in a single pool, are you including the "pending" shares and the "solo" shares? When Multipool solves its own block, all the solo shares will be rewarded... proportionally Grin.

Still tweaking optimal loads for the pools in rotation. The deepbit and btcmine bans appear to have expired. Also, I seem to have figured out how to mine from slush without breaking load constraints. Apparently, slush only remembers the last 24 shares requested per miner, and any submitted shares older than that are rejected. Using more miners on rotation did the trick! Still tweaking mtred (might have set it too restrictive!) and bitcoinpool.

In addition, the mtred earnings scraper has apparently been broken for the last two days - the mtred status page format has changed and Multipool wasn't able to parse their rounds info. Once this fix is out, the "pending" mtred blocks will get merged into confirmed rounds.

I don't really care about showing utility/shares, but if you could publish the total shares it took the 'victim' pool to solve the block our shares are from, that would be awesome. Although probably very hard. Are some of the shares getting grouped together with shares from other blocks?
Fine fine, nagging works. Here's a dump from the database of pool round total shares and timestamps. You can also get this info yourself from the pools' public stats pages.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 23/06/2011, 06:54:32 UTC
NotaNumbers (div by zero) just ruin everything! They're contagious and infect everything they come into contact with.
Oh my, how did those get into the database! Eligius must have been sending out some crazy stats if they managed to get through all the layers of sanity checks. Cleaned up the database and slapped on another sanity check, just to be sure. Besides that and the btcmine glitches, the database has held up rather well over the past week.

The reason for my Huh is that I'd thought that's what "Efficiency" already represents. E.g.: at the moment, if you solo-mine 877000 shares per block, or do the same at a proportional pool at a "luck" multiplier of 1.0, you expect 50 coins. You can check for yourself that efficiency is calculated as (received coins / expected coins), which means it would also be (utilised shares / accepted shares). But efficiency != utility/shares - again, you can check this with a few of your own.

I prolly have this the wrong way around, and it'd be handy for stats purposes if it relates to eg. the total shares in that block. Then (I hope to) relate results to Raulo's original paper on hopping and not just calculate hopping efficiency but predict best hopping algos for each pool.
As I see it, utility is related to what the multipool algorithm expects the shares to be worth ahead of time, while efficiency is related to what their real worth turns out to be after the fact, once multipool knows how many coins it will actually collect for them. Confusingly, the two are not expressed in the same way: one is expected total worth, while the other is real worth per share (both in relation to the average worth of one share at the current difficulty).

That's what I suggested should be changed, so that you'd have "expected worth per share" and "real worth per share" instead, allowing for a direct comparison. If the algorithm is correct, these numbers should more or less converge over time.
TeaRex's explanation is exactly correct. The reason I didn't display utility as "expected per share" to begin with is that each share has a unique utility. The database keeps count of the number of shares submitted by each user to each pool in each round, and also of the sum of the shares' utilities in that round. When a round is rewarded, each user receives a proportion of the round's reward equal to the proportion of the user's total utility against the total utility of all users in that round. To me it is easier to think of expected utility in these terms. Couldn't someone write a greasemonkey script if there really is demand for displaying utility per share, rather than total?
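The distribution step described above boils down to something like this (a sketch, assuming the per-round totals are already in the database):
Code:
# Split a confirmed round's reward among users in proportion to their summed utility.
my %payout;
for my $user (keys %{ $round->{user_utility} }) {
    $payout{$user} = $round->{reward}
                   * $round->{user_utility}{$user} / $round->{total_utility};
}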

Is Multipool the "crazy miner on eligius US" who has 68% of the hashrate?

http://forum.bitcoin.org/index.php?topic=6667.msg264031#msg264031

(eligius US has been down for the last week, everyone is on eligius EU)
So what's the deal with eligius? Eligius-us is back up, serving stats and getworks, and accepting shares, but it is "down"? I guess I'll keep it out of rotation while it decides its existential problem.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 22/06/2011, 09:01:24 UTC
Why not drop btcmine...
If a pool's efficiency is always less than 0.9, even PPS at deepbit is better than that...

Pool: btcmine
Shares: 6079
Utility: 6126.203
Efficiency: 0.627

We cannot drop btcmine for having lower efficiency than the other pools. Being a score-based pool, it plays an important role - providing a baseline expected utility of about 90%. Whenever you are mining at btcmine at 90% utility, all the other pools have utilities much lower than that - you would lose more by removing btcmine from rotation. Btcmine also has a very good response rate, so it often fills in extra getworks when a higher-utility pool cannot keep up. Notice that for most rounds in btcmine, the expected utility actually is just about or slightly higher than 100%. Unless you can show with statistical significance that the actual btcmine rewards, and therefore efficiency, are lower than the predicted utility, it has to stay.

An even better alternative would be a constant-utility pool like Continuum, but I am not sure whether doubling its hashrate is such a good idea. Continuum is a good fallback for solo hoppers, but we, as a pool of comparable size, would not gain much of a decrease in block-finding variance by joining up. We would be better off generating our own getworks, which, by definition, have an expected utility of 100%... Perhaps I'll increase the priority of generated getworks over pools with utilities below 100% (right now solo getworks only get used as a fallback to keep the work queue running).
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 22/06/2011, 07:55:17 UTC
Another set of payouts is out! Since sendmany transactions are apparently very cheap, the payout minimums have been relaxed to 0.10 BTC daily, or 0.01 BTC after 3 days of inactivity. I could probably even go down to 0.01 BTC daily next time.

Squashed a nasty little bug (wrong variable name at just the wrong place) in the last getwork module update that caused submitted shares to be re-routed to the last pool a user received work from, rather than the actual pool, leading to invalid shares. The final effect wasn't too bad though: the pools quickly fought it out with each other, until btcguild emerged victorious and served most of the shares for the rest of the day while the other pools cooled in the penalty box. With the fix, you should have seen much fewer invalid shares, and better pool rotation.

To anyone complaining about zero efficiency on btcmine - there are three reasons:
  • btcmine rounds are "debited" sometime after they are "confirmed". Efficiency will sit at zero for up to an hour until the btcmine money status page gets updated.
  • Value of shares decays exponentially. If efficiency is low but non-zero, you are probably looking at old shares in a long round.
  • The "money" page is not really synchronous with anything. Multipool makes best effort to guess which round each particular debit applies too, but it isn't perfect. I've checked manually just now, but only found one possible mistake (round 132174 didn't get "debited"). Even manually I cannot sort out how much it should have been worth.
But that's all beside the point, because we are apparently banned at btcmine, and also at deepbit, bitcoinpool, and possibly slush. In all four cases it appears to have been automatic rather than intentional. Considering how much workload and DoS some of the pools have been struggling with, it may be understandable if their scripts see 20GH/s worth of requests as another DoS attack and block it. As has been predicted, Multipool is being undone by its own success Roll Eyes.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 21/06/2011, 05:42:05 UTC
Finished a major overhaul of the getwork module to combat all the outages the pool has been experiencing. Getworks are now completely de-threaded and parallelized: the queue gets filled continuously by all the pools, in order of preference (no getworks are wasted, but more are obtained from higher-preference pools whenever possible). The quality-of-service checks have been changed accordingly so that Multipool should no longer manage to ban all the pools it mines from for insufficient responsiveness. As a final fallback, the pool now generates its own work a-la solo mining. Haven't yet decided which reward system to use for the solo shares Cheesy.

All this effort to manage workloads has unfortunately distracted me from the more profitable pool additions - more pool scraping. As it is, Multipool is being squeezed tight by being forced to avoid zealous automatic DoS defenses, and is really only mining at full strength from half of the pools in the current rotation. More pools in rotation will certainly help things move along. Efficiency has been somewhat lacking lately in comparison to what could be achieved.
You should target Continuum pool if all other pools are dry. It always has 100% efficiency like solo, without all the variance. It will also help reduce the variance of those who mine on it normally, further promoting fair scoring methods.
Continuum pool would definitely be a great addition!

Does the pool also implement the Lie-in-Wait attack, discussed for example here?
That's even too devious for my tastes  Grin. I suspect though that the window of opportunity is narrower than one might think. The shares go stale pretty quickly, I wouldn't want to hold on to one for longer than a minute.

Sure enough, that eligius-eu round ended 40 mins after it started, giving a 4.498 efficiency. I only had 36 shares sent into that eligius round, while in the same time span I sent 98 to btcmine. I'm not sure that this works as well as it is supposed to. Or am I getting it totally wrong? Huh
No, you are right. The reason is that Multipool has been constrained by the rate of getwork requests it can wrest from the pools. At times, some of the pools are under a lot of load and have latencies above 0.5s, ten times the normal latency. With the new parallelized getwork module, the pool should now be able to grab all the shares possible.

As for the fees, honestly, they should really be calculated based on the total efficiency, not on individual rounds. I don't want to just skim off natural variance. The difficulty with that is that since the total efficiency varies, the total collected fee could go up or down at any time, and I didn't want to deal with that. Once I have the time, I'll whip up a better fee calculator.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 20/06/2011, 00:37:12 UTC
Aye, the pool was down Sad. With all these users, the pool decided that the getwork response rate of the pool it was requesting work from wasn't fast enough, temporarily banned it, and switched to another pool. But the rates still weren't good enough, so it banned all the pools in succession. I've reset the ban counter and will figure out how to get work faster.

Just thought you might like to know that http://multipool.hpc.tw (including the user stats sub-d) is blocked by "Websense" at my work. None of the other bitcoin sites are. How the hell did you manage to piss them off?  Huh
It is a third-level domain registrar, so some of the other subdomains might have been blacklisted.

I don't really understand the statistics Tongue
Utility seems to me more like an internal scoring method for which pool is good to jump into than some "useful" value to calculate with for people who don't know the inner algorithm.
That's right: the utility is my prediction for the expected utility of each share submitted. A single solo share has utility of 1, but utility of pool shares fluctuates widely. For the stats pages, utility is summed for each round, pool, and user. If your utility is greater than the number of shares, you are predicted to be doing better than solo mining. Efficiency is the ratio of the actual rewards allotted by the pools to the expected rewards of solo mining. In the long term, the ratio of the total utility to total shares should equal the total efficiency, assuming the prediction formulas are accurate.
How are you computing the efficiency of improvement?
There are slightly different formulas for different pools. Some you can figure out on your own, others are more clever.

Why limit payouts after a week of inactivity to .10 min.
If I quit your service and have only given .09 btc of service I've inadvertently gifted you 1.50+(more ATM) ....... WTF?Huh
EDIT:
changed .10 max to .10 min

That's .10 min, not .10 max. If you have 1.50 left, you get 1.50 out. The limit is there because there is no registration for the pool. You don't want someone making up a hundred thousand bitcoin accounts, submitting a hundred thousand shares (each worth 0.00005702 BTC), and costing the pool 50 BTC in fees to pay out 5.702 BTC in rewards.
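For the curious, the arithmetic behind that example, assuming the ~877,000 difficulty of the time and roughly 0.0005 BTC of transaction fee per extra payout output (both figures are my assumptions):
Code:
my $share_value = 50 / 877_000;                          # ~0.000057 BTC per share
printf "payouts: %.3f BTC, fees: %.0f BTC\n",
       100_000 * $share_value, 100_000 * 0.0005;         # ~5.7 BTC paid out vs ~50 BTC in fees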
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 19/06/2011, 00:06:19 UTC
I notice that the results from btcmine were changing from 'pending' to being zeroed out, and then only 10 - 30 mins later showing the actual results. This was also affecting the page top stats for a while too. Screen scraping issue?

Here as well, I got 2 rounds at btcmine that are rewarded with 0 out of 5 rounds that are not pending any more - and interestingly they are in between rounds with payouts:
131504   Fri Jun 17 22:35:06 2011   145   179.681   0.00000000   0.000   0.00000000
131499   Fri Jun 17 22:01:12 2011   622   626.707   0.01991538   0.562   0.00000000
131469   Fri Jun 17 19:02:33 2011   93   108.445   0.00000000   0.000   0.00000000
Hope this helps in squashing bugs! Smiley
You are right - btcmine was the most difficult of the pools to scrape, because it does not indicate actual rewards on a per-round basis. The rounds are "earned" some time after they are "confirmed"! I've cleaned up the stats a bit (a couple of the rounds were mis-attributed), see if these are any better. However, there will always be rounds with very low or even zero rewards in score-based pools, because they have very high variability. Imagine: due to the exponential decay, all the shares that are worth 1.0 BTC now will be worth only 0.00000615 BTC one hour from now - 12 decay periods later. And rounds can last for many hours...
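That 0.00000615 figure is consistent with a 1/e score decay every five minutes (an assumption on my part about the exact constant): one hour is twelve decay periods, so a share keeps exp(-12) of its value:
Code:
printf "%.8f\n", exp(-12);     # prints 0.00000614, essentially the figure quoted above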

From what I see, in my account as well as on your demo account, slush's pool and BTC Mine have very low efficiencies. Maybe this is because they are using score-based calculations? Either way, if it keeps up, I'd remove them for better efficiency. Other than that it works as advertised Smiley.
Again, the score-based rewards result in higher variability. The expected utility is still above 100% for most rounds. However, since the expected utility of score-based pools levels out at about 90% during long rounds, they are usually the fallback option when all the other pools are also having long rounds (and shares-based expected utility keeps dropping to zero). If this happens too often, I might even have to add a solo pool option and generate my own getworks for the miners during dry periods! Cheesy Being the go-to fallback option, score-based pools are usually not as profitable overall (but still more profitable than their operators would like you to think). It will take more time though to see whether the actual average rewards will match my utility predictions.

why not connect only to 0% fee pools? there are a lot of them Wink
It is essential to maximize the number of pools in rotation. At many points in time, even pools with fees have higher expected utility than any other pool. If pools implement fair algorithms that make the expected utility constant throughout time, then fees will indeed play a greater role.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 18/06/2011, 09:15:36 UTC
First payouts are out!

Question: do you have an idea how long it takes before you start collecting anything?
Multipool has to wait until it collects the rewards from the pools. Since there are many pools with work split approximately evenly between them, it takes that much more time to cross the pools' automatic payout threshold. Although now, with more people, collections should speed up.

This would mean you donate 2.5% to BTCguild - why are shares then not immediately confirmed?
I'm making sure that the screen scraping is working adequately. Once I'm satisfied, I will include the confirmation-pending rounds as well. You do get paid for invalid rounds, as long as they've reached the head of the confirmed queue.

I'm getting many of these after a while.
The server handles requests fine, but some of the pools are running pretty slow sometimes, and the work queue wasn't getting replenished fast enough, even with multiple request threads. Been tweaking this a bit, and have more ideas for improvements for later.

Also, by contributing to this pool, you are ruining the network's security, as massive pool hopping inevitably leads to everyone hopping to the same pool at the same time, creating one giant überpool that is waay over the 50% "safe limit".
Actually, the equilibrium situation where all miners are perfectly rational (and the pools continue to use the shares method) is quite different. It only makes sense to jump at the 43% point of a round if everyone else continues to mine at the same rate. If you knew everyone would jump at the 43% point, you would have to jump earlier to maintain >100% efficiency, but then everyone else would jump earlier as well! In the 100% rational limit, no one would ever join a shares-based pool. Pool mining would become impossible and everyone would go back to solo mining! In the real world, where only 5% of people are rational, I adjust accordingly.

There could be ways around this, such as requesting just a few getworks from different IPs, a small helper program that constantly asks for a few getworks and sends them to the metapool... should be easy to set up and not too hard to find a few "mirrors/nodes" for that. I would happily contribute until all pools finally agree that pool hopping is not a crime but something that is THEIR OWN FAULT!

Also, if this pool really gets banned and fought big time instead of the pools solving the issue of pool hopping themselves, I hope Multipool just releases the code, so anyone can run it locally in private. Roll Eyes (Edit: Just like any other pool hopper currently does!)
All great ideas! If ip banning does get out of control, I can release a small proxy script for the miners to use to relay Multipool traffic. And if I do grow tired of being a pool tycoon and the mining pools still haven't implemented fair algorithms, I will release the source code, so that anyone can run their own metapool.

Even without the pool hopping algo, this would provide excellent failover. Maybe it will still be around to provide that when pool hopping is dead.
It looks like it's a nice fail-over, yeah, but in reality it's just another additional point of failure. You're way better off running two miners on two different pools locally (with different priorities, of course).
A pretty valid concern. If a person is in search of better uptime, they select the pool with the highest reliability. Multipool would at best only be as good as its own uptime. I would've just released the code outright, but there are already pool mining proxy programs available, and this is just too much fun.
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 17/06/2011, 06:56:32 UTC
... unless Multipool itself gets DDoSed.

I am afraid to think about what would happen if I point Multipool at Multipool's listen socket...  Shocked
Post
Topic
Board Pools
Re: Multipool - the pool mining pool
by
Multipool
on 17/06/2011, 06:20:36 UTC
"Listener for "gpu1 (multi)": 16/06/2011 19:30:00, Problems communicating with bitcoin RPC"
Edit 2: Back up now, seems like I just caught you in the middle of a reboot or something Smiley

That's the failover system in action! Pools have been going on and off line all the time for the past week. Right now, for example, both slush and deepbit are down. For now, I have decreased the leniency for pool errors to reduce downtime durations (it is still more profitable, though, to mine in a pool with high connection errors and high expected utility rather than one with no errors and low utility), but I will continue to tweak the criteria and will implement round-robin fail-overs for zero downtime.