Re: SILENTARMY v5: Zcash miner, 115 sol/s on R9 Nano, 70 sol/s on GTX 1070
by xeridea on 17/11/2016, 21:56:45 UTC
Board: Mining (Altcoins)
The RX 480 is a tossup with the GTX 1070 on memory access - which is why they're comparable at best in performance on algorithms like the ones ZEC and ETH use, despite the GTX 1070 costing almost twice as much.

Nvidia gets the "scraps" on mining because most mining algorithms don't use the parts of an Nvidia card that make it competitive with AMD cards on general or compute-bound usage at a given price point. As a result, few folks mine on Nvidia cards, which makes them a much lower priority for development.

 It's not "lack of development" ALONE that keeps Nvidia uncompetative on a hash/$ basis for ETH and ZEC (and derivatives using the same algorythms).
 It's the inherent design of the ALGORYTHMS that keep NVidia uncompetative on a hash/$ basis coupled with the higher PRICE of their cards that have competative memory access even when development IS mature.

It's waaaaay too early to call this based on memory bus width. There is a lot of theorycrafting and it's all based on current hashrates and extrapolating against the original CPU miner code, not GPU optimized code, and not code made specifically for Nvidia hardware.

The only algo that doesn't fully utilize a 1070 is Dagger (Ethereum), which I've mentioned before. That has led to a misconception about the capabilities of a 1070... see your post. There are a lot of other algos out there... NeoS, Lyra2v2, Lbry, and more, all of which the 1070 performs quite well in. However, they aren't high volume, and so it leads to statements like yours... assuming all of crypto land is just Dagger-Hashimoto. Dagger is the only really memory-bound algo out there; CryptoNote is too, but that one is controlled by CPU botnets because of it.

It is the lack of development in Equihash, that's for certain. The only Nvidia-optimized miner that has come out was from NiceHash, and it was worthless a day later since it wasn't being made by the big three.

The term you were looking for is 'scrypt' and that is where things died for AMD as well.


 ETH and ZEC are both memory-limited algorithms - and are where AMD is currently shining once again.
 NeoS, Lyra2v2, and Lbry don't make much - even with the limitations of the memory system on a 1070 making it no faster than the AMD RX 470/480, I still see better profitability out of my 1070s on ETH than on any of the coins based on those algorithms.


 Scrypt died for GPUs when the Gridseed and later ASICs showed up for it - it had nothing to do with AMD vs Nvidia.
 I wasn't around early enough for the Bitcoin GPU days, but it appears the same thing happened there.


 Also, I never specified memory bus width - I'm talking OVERALL memory access, the stuff that keeps the R9 290x hashing at the same rate on ETH as the R9 290 (among other examples).
 The reason the R9 290/390 and such are competitive on ETH and ZEC is that their bus width and other memory subsystem design make up for their much lower memory speed, but the algorithms used in ETH and ZEC are very much memory-access limited more than compute limited (or the R9 290x would hash noticeably better than the R9 290 does - on ETH at least, where the code has been well optimised, they hash pretty much identically presuming the same clocks and the same BIOS memory system mods).

 Do keep in mind that for ETH at least there IS a miner (Genoil's) that started out as CUDA-specific and is well optimised for Nvidia, yet the AMD RX series cards match or better the Nvidia GTX 10xx cards on that algorithm in both raw performance AND hash/watt, and at a much lower price point.
 This isn't the case as much for ZEC (the code is still getting optimised), but it's become apparent that ZEC is yet another "memory hard" algorithm by design and implementation that does not reward superior compute performance past the point that the memory subsystem starts hitting its limits (if not as much so as ETH).


 No, I'm not an "ETH baby" - all of my early ETH rigs were Scrypt rigs back in the day (give or take some cards getting moved around) that spent their time after Scrypt went ASIC doing d.net work (and most of the HD 7750s from my scrypt days are STILL working d.net via the BOINC MooWrapper project).


 I don't know where you're coming up with Nvidia being 40% more efficient than the RX 4xx series - right now it's looking like actual efficiency is more or less a tossup, but very dependent on what you're actually working on with a given card. Even on Folding, where Nvidia offers a clear performance lead, the RX 480 is a tossup with the GTX 1070 on PPD/$ at the card level, very close at the system level, and very close on PPD/watt (less than 10% per the data I've seen at the card level).
 I do NOT see a 40% efficiency benefit for Nvidia even in one of its biggest strongholds.


That is definitely incorrect. Private kernels killed Scrypt mining... ASICs came along later. If you weren't around at the end of '14 you wouldn't have figured that out. Not everything is the big bad ASIC bogeyman... Sometimes it's just greed and people turning off the lights. You can Google my posts from BCT in '14 and check them out. Hence why I'm here trying to motivate some development for Nvidia's side.

"I don't know where you're comming up with NVidia being 40% more efficient than the RX 4xx series - right now it's looking like actual efficiency is more or less a tossup"'

With a lack of coding for Nvidia, you're making this statement off of current conditions and rates. Do you think as much effort is going into developing code for Nvidia as for AMD right now? The answer is no. You already said no. The efficiency argument is based off of algos that actually use more than memory, and not just those but gaming as well. While mining isn't gaming, gaming has been optimized quite a bit over the years. When one brand is getting maxed out, the other is as well. Go look up some hardware benchmarks; that's pretty fundamental stuff.

Genoil's Equihash miner isn't CUDA-optimized. That was Dagger, not Equihash. His endeavours in Equihash are focused on AMD hardware, as that's what he owns. It wasn't until recently that he made an Nvidia-compatible miner, and it's just a port of SAv5.

Alright, how about some sources for Equihash being hardware memory-bus-width locked that I haven't seen on BCT and that aren't extrapolated from a CPU miner or from current rates on AMD hardware. You know the Fury also has a better processor than an R9 290? You also know that an RX 480 is basically a mid-range GPU with processing power to match (close to or a bit less than an R9 290)?

Do you also know that if you want to check whether an algo is memory-limited, you can go into GPU-Z and check out the MCU (memory controller unit) and see the load on it? Mine sits at 38% at 108 sol/s for a 1070. If we want to take a page from your book and 'extrapolate' from that, that means there is potential for 284 sol/s on a 1070 - that is, IF it's completely memory bound and without any sort of optimizing for Nvidia hardware. NeoS also sits around 30% MCU usage. Dagger sits at 100% right before it trades off to more GPU and power usage (if you use a dual miner). CryptoNote also sits at 100% utilization. Weird, all the 'smart minds' and no one bothers checking the gauges.
By similar extrapolation, a 480 could do 266 sol/s (no memory OC). So slightly slower at half the cost, similar power. So even with both using theoretically optimal miners, the 1070 is still a poor choice. A 1060 3GB would be reasonable, but still not as good in sol/$.
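For anyone who wants to redo that arithmetic, here's a minimal sketch in Python of the extrapolation both of us are using - it assumes sol/s scales linearly with memory controller load until it hits 100%, which is the big "IF" in both our numbers:
Code:
# Hypothetical ceiling if the algo were purely memory-bound:
# observed sol/s divided by observed memory controller load.
def mcu_ceiling(sols_observed, mcu_load):
    """Extrapolate max sol/s assuming linear scaling to 100% MCU load."""
    return sols_observed / mcu_load

print(mcu_ceiling(108, 0.38))  # GTX 1070: ~284 sol/s
print(mcu_ceiling(160, 0.60))  # RX 480:  ~266 sol/s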

Where are you getting your power usage numbers from? Power efficiency matches neither other algos nor gaming benchmarks. You know why so many people are complaining about 'Claymore killing my GPUs'? Because they aren't used to a full power load on their hardware. They based it around silly low numbers, like the first releases of Equihash or even Ethereum miners. As a miner more fully utilizes your hardware, it will start approaching the maximum TDP of the card. While the 1070 and the 480 have similar TDPs, the amount of processing power available to a 1070 is almost double that of a 480.

Is your MCU usage sitting at 38%, or did you just take my number and use it yourself? How about a screenshot.

my bad
Code:
That is definitely incorrect. Private kernels killed Scrypt mining... ASIC's came along later

It looked that way back then to me. I also stopped using GPUs to mine any coins when the SHA-256 ASIC miners came out; if I remember right, they came out before the Scrypt ASIC miners did, so you're probably right - I must have come around to mining about the time the shit hit the fan and missed more than I thought. I do know the first Scrypt ASIC was made by LKETC. It looked like a Gridseed Blade miner but had more hash power, and it may have had help from someone who made the Gridseed, but LKETC made the first Scrypt ASIC miner (Avalon made the first ASIC miner of any kind). I remember when LKETC's came out, and shortly after, Gridseed came out with their version and took over the market, so to speak. Then came Zeus and all the others, including scam companies (which Zeus turned into). But I didn't make the above post - I made another, shorter version.

Scrypt ASIC miners debuted in '15. Wolf0 is one of the popular private kernel devs who killed Scrypt mining. Back then he only sold to big farms, and only a handful of them. X11 started coming out around the end of '14, but once again it was specifically private kernels that were dominating it, and it was unprofitable with public miners by the end of '14. X11 ASIC miners didn't debut until '16, though they were probably in use since the end of '15.

I looked at a ton of different options then, but sold all of my AMD hardware at the time as it was below power costs to mine with said hardware even with pretty cheap electricity. I took a 60% loss to my assets because of it. I would've made a pretty good amount of money on Ethereum with it, but you know hindsight is always 20/20 and it was an entire year before Ethereum came out.

This is going to happen again once Ethereum starts going PoS, which it's already doing. This spring it's going to be pointless to mine Ethereum, and all that AMD hash is going to be looking for something juicy to sink its teeth into. Power efficiency is going to start mattering a lot more. Maybe Equihash will turn into the next Dagger, but those are big shoes to fill and we're just at the beginning.
The 480 and 1070 have similar TDPs. Mining Zcash, their power usage would be similar - the 1070 maybe slightly less if you downclock it, but you can also undervolt the 480. Even if the 1070 is slightly more efficient with an optimized Zcash miner, it doesn't matter much. I make 9x more on ZCash than I spend on power. So it isn't worth spending $400 on a card that has the same speed as a $200 card.
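Same back-of-envelope style as above - a rough sol/s-per-dollar comparison using the extrapolated ceilings from earlier and the card prices we've been throwing around (illustrative figures from this thread, not benchmarks):
Code:
# Rough hash-per-dollar comparison; prices and sol/s ceilings are
# the numbers argued in this thread, not measured results.
cards = {
    "GTX 1070": {"price_usd": 400, "sols": 284},
    "RX 480":   {"price_usd": 200, "sols": 266},
}
for name, c in cards.items():
    print(f"{name}: {c['sols'] / c['price_usd']:.2f} sol/s per $")
# GTX 1070: 0.71 sol/s per $
# RX 480:   1.33 sol/s per $
# With similar power draw and revenue ~9x power cost, the hardware
# price dominates, so the 480 nearly doubles the sol/s per dollar.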

The 38% wasn't from me. I was using a similar method of extrapolation. I get 160 sol/s on a 480, no overclocks, ~60% MCU on Claymore 6.0.