Board: Mining (Altcoins)
Re: Claymore's Dual Ethereum+Decred GPU Miner v4.1 (Windows/Linux)
by IncludeBeer on 03/05/2016, 06:19:18 UTC
So I finally added the last 2 GPUs to my mining rig. Before, I had 4 cards hooked up and everything hashed as expected, but with somewhat frequent disconnects from pools (especially suprnova), at least according to the miner.

Last night I added the last 2 390X's, and everything booted and all cards were recognized just fine. However, once I start the miner, TeamViewer almost immediately disconnects and I have a very hard time reconnecting. The miner reports the ETH hashrate as expected (~160 MH/s), but my pool (coinotron) reports only 2-5 MH/s, a fraction of what it should be. I tried running Genoil's latest release, but CPU usage jumped to 99% once mining started, and after a couple of minutes I saw TeamViewer disconnect along with display driver crashes, so I quickly scrapped that strategy.

I'm not really sure what the issue might be, but it seems the wireless adapter I have installed isn't keeping up. Any advice? For completeness' sake, here's the rig:

4 GB RAM
Celeron CPU
2x 1000 W PSUs
4x R9 390X
2x R9 280X
Windows 10

Thanks!

I had a similar issue previously, until I installed a "headless HDMI or DVI adapter" - after that, TeamViewer or VNC connects as normal, as if the remote rig were attached to a PC monitor.

Well, whenever I first start the rig up I have a monitor plugged into the first GPU, and I still see the same behavior. TeamViewer only disconnects, or at least seems less stable, when I fire up a miner...

OK, so I'm quite sure my first problem is the wireless adapter I'm using. Plugged in with an ethernet cable, I get a steady connection and have no problems keeping TeamViewer connected either. Unfortunately, the spot where I want my rig is nowhere near my modem/router, so I need to find a dependable wireless solution. Any suggestions?

2nd problem: once I was wired into my network, I noticed that after a few minutes of running all 6 cards, a couple of them would reach 94°C or higher. At that point the display driver would fail, and the 3 GPUs plugged into my 2nd PSU would shut off (sometimes the others would stay up, sometimes the entire rig would just shut down). So I guess I need to buy a Kill-A-Watt or similar to monitor the power draw on each PSU.
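In the meantime I'm thinking of capping the temps from the .bat file. Just a sketch of what I'd try, assuming my Claymore version supports the -tt and -tstop options (the pool, wallet, and password values below are placeholders, not my real settings):

EthDcrMiner64.exe -epool <pool_host>:<port> -ewal <wallet_or_username> -epsw x -tt 75 -tstop 88

The idea being: -tt targets 75°C by ramping the fans, and -tstop shuts a card down if it still hits 88°C, which should at least stop the driver crashes while I sort out cooling.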

Given I have 2x 280X and 4x 390X, does anyone have a recommendation for how to split them up? I mean, how many cards on the primary PSU and how many on the secondary (I'm using an Add2PSU to link the two 1000 W PSUs)?
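My rough math, with board-power figures that are just my assumptions (reference 390X around 275-300 W, 280X around 250 W, plus ~100-150 W for the Celeron, board, and risers):

  4x 390X: ~1100-1200 W
  2x 280X: ~500 W
  CPU/board/risers: ~100-150 W
  Total: ~1700-1850 W across both supplies

So I'm leaning toward 2x 390X + 1x 280X on each PSU, which puts each 1000 W unit at roughly 800-900 W under load. Does that split sound reasonable, or is that still cutting it too close?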

Thanks for any feedback!