Board: Mining (Altcoins)
Topic: Re: Claymore's Dual Ethereum+Decred GPU Miner v4.1 (Windows/Linux)
Post by arielbit on 03/05/2016, 13:38:52 UTC
So I finally added the last 2 GPUs to my mining rig.  Before, I had 4 cards hooked up and everything hashed as expected, but with some "mildly" frequent disconnects from pools (especially suprnova), according to the software.

Last night I added the last 2 390x's, and everything booted and all cards were recognized just fine.  However, once I started the miner, TeamViewer would almost immediately disconnect and I would have a very hard time reconnecting.  The miner reported the eth hashrate as expected (~160 MH/s), but my pool (coinotron) reports only 2-5 MH/s, a fraction of what it should be (see the share-math sketch after this post for how that gap happens).  I tried running Genoil's latest release, but CPU usage jumped to 99% once mining started, and after a couple of minutes I saw TeamViewer disconnecting as well as display driver crashes, so I quickly scrapped that strategy.

I'm not really sure what the issue might be, but it seems the wireless adapter I have installed may not be doing what it needs to.  Any advice?  For completeness' sake:

4 GB RAM
Celeron CPU
2x 1000W PSU
4x 390x
2x 280x
Windows 10

Thanks!
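
A note on the gap between the 160 MH/s the miner prints and the 2-5 MH/s the pool shows: pools estimate hashrate from the shares your rig actually submits, not from what the software reports locally, so a flaky connection drags the pool-side figure down even when the cards are hashing fine. Here is a rough Python sketch of that math; the 4 GH-per-share difficulty is just an assumed example value, not coinotron's actual setting.

# pool-side hashrate is estimated from accepted shares, not from the miner's local readout
SHARE_DIFFICULTY = 4_000_000_000      # hashes of work per accepted share (assumed example)

def pool_hashrate_mhs(accepted_shares, elapsed_seconds):
    """Effective hashrate the pool would report, in MH/s."""
    return accepted_shares * SHARE_DIFFICULTY / elapsed_seconds / 1e6

# a rig really doing ~160 MH/s should land roughly this many shares in 10 minutes:
print(160e6 * 600 / SHARE_DIFFICULTY)                                # ~24 shares

# if connection drops mean only one share gets through every 20 minutes or so,
# the pool's estimate collapses even though the rig never stopped hashing:
print(pool_hashrate_mhs(accepted_shares=1, elapsed_seconds=1200))    # ~3.3 MH/s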

I had a similar issue previously, until I installed a "headless HDMI or DVI adapter" - after that, my TeamViewer or VNC connects as per normal, as if the remote rig were attached to a PC monitor.

Well, whenever I first start the rig up, I have a monitor plugged into the first GPU, and I still see the same behavior.  TeamViewer only disconnects, or at least seems less stable, when I fire up a miner...

Ok, so I'm quite sure my first problem is the wireless adapter I'm using.  Plugged in with an ethernet cable, I get a steady connection and have no problems connecting over TeamViewer either.  Unfortunately, the spot where I want my rig is nowhere near my modem/router, so I need to find a dependable wireless solution.  Any suggestions?

2nd problem: Once I was wired in to my network, I noticed that after a few minutes of running all 6 cards, a couple of them would reach 94°C+.  At that point the display driver would fail, and the 3 GPUs I had plugged into my 2nd PSU would turn off (sometimes the others would stay up, sometimes the entire rig would just shut down).  So I guess I need to buy a Kill-A-Watt or similar to monitor the power draw on each PSU.

Given I have 2x 280x and 4x 390x, does anyone have a recommendation for how to configure them?  I mean, how many cards on the primary PSU and how many on the secondary (I'm using an Add2PSU to connect the 2x 1000W PSUs).

Thanks for any feedback!

I will use 2x 280x + 1x 390x in the main, and 3x 390x in the secondary.  That way you might have roughly 800W on each PSU.
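
To sanity-check that split with some rough numbers: assuming around 250 W per 280x and 300 W per 390x under mining load (assumed figures, not measurements; an overclocked 390x can pull more), the loads work out roughly like this.

# back-of-the-envelope load per PSU for the split above (wattages are assumed, not measured)
WATTS = {"280x": 250, "390x": 300}

def psu_load(cards):
    return sum(WATTS[card] for card in cards)

primary   = ["280x", "280x", "390x"]    # this PSU also feeds CPU, motherboard and risers
secondary = ["390x", "390x", "390x"]

print(psu_load(primary))      # 800 W of GPUs, plus whatever the rest of the rig draws
print(psu_load(secondary))    # 900 W of GPUs on a 1000 W unit

Both units end up close to their 1000 W rating if you like to keep steady load around 80% of capacity, which fits with the advice below to either add another PSU or run only 2x 390x per 1000 W unit.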

you will be needing another psu.

for me..

2x 390x per 1000w PSU, your 2x 280x can be powered by a 750w..

i don't own a 1000w PSU.. I only have an 850w PSU and the biggest load it's powering right now is an overclocked 390 and 390x..

if you are experimenting, then depending on your PSU's quality and performance you can try 1x 280x + 2x 390x per 1000w PSU.. I've read that the coolermaster v1000 made by seasonic has some allowance above its rating (peaks up to 1100w) and can power 3x 280x

regarding the heat, spacing and an electric fan are the key.. blow the heat away from the gpu's

I have 2x r9 280x and 2x r9 380 running, primary: 1000 watt and secondary: 650 watt.
I can't run my third 280x.  How much power am I short by?

you are saying a 650w psu can't run a 280x?.. I have a 600w psu and it can power a 280x

check your psu's 12v rail; some models have two 12v rails, so use them both, with a molex or sata to PCIE adaptor, to power your GPU
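
To put numbers on the rail advice: PCIe power is delivered at 12 V, so a card's wattage translates directly into amps on the 12 V rail. The figures below are assumed examples; the real per-rail amp ratings are printed on the PSU's label.

# rough 12 V rail check (example numbers; read the actual ratings off your PSU label)
def amps_at_12v(watts):
    return watts / 12.0

card_draw = 250        # assumed draw in watts for one 280x under mining load
rail_rating = 25       # example amp rating of a single 12 V rail on a multi-rail PSU

print(amps_at_12v(card_draw), "A needed vs", rail_rating, "A per rail")   # ~20.8 A needed
# if one rail can't comfortably cover the card plus whatever else hangs off it,
# feeding the card's PCIe plugs from both rails (the adaptor trick above) spreads the load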