ZEC is core-count and clock-speed dependent, so Nvidia does better at it. ETH is memory-bandwidth dependent, so AMD's lower core count/clock doesn't hurt it much. The memory numbers you see reported as 7000 or 8000 MHz are just 4x the actual memory clocks of 1750-2000 MHz, since GDDR5 transfers data four times per clock. Both AMD and Nvidia use essentially the same GDDR5X memory - similar chips, similar clocks.
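To make that 4x relationship concrete, here's a quick sketch of the clock math. The 2000 MHz actual clock and the 256-bit bus width (which both the RX 480 and GTX 1070 happen to use) are illustrative figures, not something from this thread:

```python
# GDDR5 is quad-pumped: the effective data rate is 4x the real memory clock.
actual_clock_mhz = 2000               # real memory clock (illustrative)
effective_mt_s = actual_clock_mhz * 4 # the "8000 MHz" number monitoring tools report
bus_width_bits = 256                  # bus width on e.g. RX 480 / GTX 1070

# Peak theoretical bandwidth: transfers/sec x bits per transfer, in GB/s
bandwidth_gb_s = effective_mt_s * 1e6 * bus_width_bits / 8 / 1e9

print(effective_mt_s)   # 8000
print(bandwidth_gb_s)   # 256.0
```

That 256 GB/s figure is why these two cards end up so close on ETH despite very different shader counts.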
No, AMD does not use GDDR5X - they're using GDDR5, and will be moving to HBM(2) instead.
My bad, I meant GDDR5, not GDDR5X. Edited it now.
The RX 4XX/5XX cards use GDDR5, same as the Nvidia 1060/1070. The Nvidia 1080/1080 Ti use GDDR5X memory, which is different from GDDR5.
That's correct, so what I essentially meant was that the RX 470/480/570/580 and GTX 1060/1070 use the same/similar memory.
Which brings us to the question - why is GDDR5X worse than GDDR5 for mining performance? Is it because none of the miner code is written to effectively utilise the deeper prefetch?
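One way to make that prefetch question concrete (a sketch, and only one hypothesis for the gap): per the JEDEC specs, GDDR5 uses an 8n prefetch and GDDR5X a 16n prefetch over a 32-bit per-chip interface, so the minimum burst doubles from 32 to 64 bytes. If a kernel's memory accesses don't line up with the larger burst, part of each fetch is wasted bandwidth:

```python
# Minimum burst size per access: prefetch depth x chip interface width.
def burst_bytes(prefetch_n, interface_bits=32):
    """Bytes delivered per burst for one GDDR chip (32-bit interface)."""
    return prefetch_n * interface_bits // 8

gddr5_burst = burst_bytes(8)    # 8n prefetch  -> 32 bytes
gddr5x_burst = burst_bytes(16)  # 16n prefetch -> 64 bytes

print(gddr5_burst, gddr5x_burst)   # 32 64
```

Whether the miners' Ethash access pattern actually straddles those 64-byte bursts badly is exactly the open question here.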