I've jotted down some quick napkin math. Hashrate isn't necessarily perfectly linear in total memory bandwidth, but it's a reasonable analogue for round-number estimates.
RTX 3090* - ~935-1008 GBps
RTX 3080* - 760 GBps
RX 5700 XT - 480 GBps - 58 mh/s (eth)
RTX 2080ti - 616 GBps - 52 mh/s (eth)
GTX 1660 super - 336 GBps - 30 mh/s (eth)
RX 580 - 256 GBps - 30 mh/s (eth)
I think it's reasonable to guess 3080 hashrates in the 60s or low 70s and the 3090 in the 90s. They're neither low-TDP nor cheap, so to me this doesn't represent a value proposition. But YMMV.
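To make the napkin math above concrete, here's a small sketch (my own illustration, not from the thread) that extrapolates a hashrate range for the new cards by assuming hashrate scales linearly with memory bandwidth and taking the min/max MH-per-GBps ratio from the known cards:

```python
# Known cards: (memory bandwidth in GB/s, ethash hashrate in MH/s)
known = {
    "RX 5700 XT": (480, 58),
    "RTX 2080 Ti": (616, 52),
    "GTX 1660 Super": (336, 30),
    "RX 580": (256, 30),
}

# MH/s achieved per GB/s of bandwidth, per card
ratios = [mh / bw for bw, mh in known.values()]

def estimate(bandwidth_gbps):
    """Crude (low, high) MH/s range assuming linear scaling with bandwidth."""
    return (bandwidth_gbps * min(ratios), bandwidth_gbps * max(ratios))

for name, bw in [("RTX 3080", 760), ("RTX 3090", 936)]:
    low, high = estimate(bw)
    print(f"{name}: {low:.0f}-{high:.0f} MH/s (linear estimate)")
```

The spread is wide because the per-card ratios differ (the 2080 Ti gets less out of its bandwidth than the 5700 XT), which is exactly why this is only napkin math.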
Source for the RTX 3000 card specs:
https://wccftech.com/nvidia-geforce-rtx-3070-8-gb-official-launch-price-specs-performance/

RTX 3090 - 936 GBps - 75~80 mh/s (eth) ??
RTX 3080 - 760 GBps - 63~67 mh/s (eth) ??
RTX 3070 - 512 GBps - 40~45 mh/s (eth) ??
RX 5700 XT - 480 GBps - 58 mh/s (eth)
RTX 2080ti - 616 GBps - 52 mh/s (eth)
GTX 1660 super - 336 GBps - 30 mh/s (eth)
RX 580 - 256 GBps - 30 mh/s (eth)
If that's the case, it looks like sticking with the RX 5700 XT would be better.
We can't really just linearly correlate hashing power like that. Factors such as the new GDDR6X memory, compared to the previous GDDR5/GDDR6 versions, could offer a boost to hash power beyond what raw bandwidth suggests, or not. If I had to guess, the hashing power boost should be much better.
Anyway, I'm convinced to get a 3080 at the new price point. At least one.