Let me begin by joining the chorus of miners praising this effort. I spent a week working towards the same goal before finding this thread. I think you've probably saved me a month of evenings developing something similar. Sending some hashes your way as I test!
I'm using an MSI Z170A M5 with 4 ASUS GTX 1060 6G GPUs on an 850 W PSU. I'm also running a single GTX 1060 on a muuuuch older ASUS P5N-D board (yes, that old), and nvOC still works like a charm.
One thing I've noticed with nvOC versus my own Linux install is that I can't seem to get the same hashrate in nvOC. Even if I set the power limit to 140 for the card, push the memory offset all the way to 2000 and the GPU clock offset to 130, I can't break 21 MHS in nvOC. Comparatively, if I just load Ubuntu 16.04 with xorg, GNOME and the latest drivers, with Claymore dual mining ETH and SC, I can sustain 23 MHS for days. Same if I put a card in my Win10 box and use MSI Afterburner to control the OC there. I've tried throwing configurations at my nvOC node manually with nvidia-settings -a and it just doesn't seem to get beyond 20 MHS. I can open the NVIDIA control panel and verify that all the settings took. The only difference I can find so far between nvOC and my own build is that I was still using the 375.66 version of the driver, while nvOC appears to be using 378.13.
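For reference, the manual tweaks I've been applying look roughly like the sketch below (GPU index 0 shown; values match the ones above). This assumes the proprietary NVIDIA driver, an X session, and Coolbits enabled in xorg.conf; the `[3]` is the Pascal performance-level index, which may differ on other cards.

```shell
# Power limit: 140 W (nvidia-smi, needs root)
sudo nvidia-smi -i 0 -pl 140

# Prefer maximum performance, then apply the offsets (nvidia-settings,
# needs a running X server with Coolbits enabled)
nvidia-settings -a "[gpu:0]/GPUPowerMizerMode=1"
nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=130"
nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=2000"
```

Running the same commands on both the nvOC box and the stock Ubuntu box at least rules out the settings themselves as the source of the delta.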
Anyone have any thoughts or suggestions on what else I could try? I'd love to switch all my gear to this, but right now I'm getting better hashrates out of a 1060 in Win10 and 1060s on Ubuntu not using nvOC. And across multiple 8-GPU rigs, the delta adds up.
Thanks!
From most data sets you will still get a slightly better hashrate from Windows; I have even seen some 1070s pushing 31-33 there, while on Ubuntu I have only gotten 29-31. The biggest advantage, though, is stability: Ubuntu has shown a far more constant and stable hashrate than any Windows PC (and that's beyond the Win 10 auto-update issue on Home edition).
Agreed on Windows vs Linux mining. I've observed greater stability in Linux. However, comparing a fresh Ubuntu install to nvOC, I still observe a 3+ MHS difference per card per node. Using nvOC is like taking a card out of every one of my nodes. I'm continuing to investigate, b/c this is a great offering. The only other things I've noticed so far: 1) the xorg.conf on both my test machines lists the cards as 1050s instead of 1060s; 2) my multi-GPU test node seems to have a problem with any windows on the console showing up off-monitor. My working hypothesis is that it's b/c I plugged in a DVI monitor rather than HDMI, but that seems pretty thin to me. Anyone else run into that issue? Google searches so far have not been fruitful.
Nvidia is weird; I did use a 1080p HDMI monitor when building nvOC, so it is possible that it freaks out with DVI. Yes, I know HDMI is essentially DVI + digital audio. However, there are several different types of DVI; they are not all interpreted the same as HDMI on Linux (primarily the analog DVI variants).
The GPU title in xorg.conf is meaningless in any systemic capacity, so it's not a problem that it doesn't list 1060s there.
Different drivers will have different optimal clocks; I would try my suggested clocks and memory-clock bumping. If you still don't get comparable hashrates, then I would use whatever OS gives you the best results.
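Since the driver version seems to be the main variable between the two builds, it's worth confirming what each box is actually running before comparing clocks. A quick check, assuming the standard NVIDIA tooling is installed:

```shell
# Report the loaded NVIDIA driver version (e.g. 375.66 vs 378.13)
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# Alternative that works even without a GPU query, straight from the kernel module
cat /proc/driver/nvidia/version
```

If both rigs report the same driver and the delta persists, the driver can be ruled out and the miner/clock settings become the next suspect.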
I will make a 375-driver version eventually (it is on the list), but I am trying to focus first on helping users resolve problems, and then on implementing the most requested improvements.
Understood with regard to card titles. Looking through oneBash I saw some logic that appeared to be predicated on the title, and I took a guess without testing it. I normally use HDMI for everything, but have an OOOOOLD VGA/DVI-only monitor mounted above my test rack. I will test with a more current monitor and report back my results.