Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin?
Board: Altcoin Discussion
by TPTB_need_war on 07/02/2016, 17:28:40 UTC
If the odds are great enough then I agree, and that is why I said increasing the size of the memory space helps. For example, for a 128KB memory space with 32KB memory banks, the odds are only roughly 1/4 (the actual computation is more complex than that), not 2^-14.

No, no, no. Banks operate independently of each other.

Why do you say 'no' when I also wrote that the alternative possibility is that the banks are independent:

(or can schedule accesses to more than one memory bank simultaneously? ... I read DRAM gets faster because of increasing parallelism)



But each bank can only have one of its 2^14=16384 rows active at any time.

My point remains that if there is parallelism in the memory access (whether by coalescing accesses to the same bank/row or, for example, by 32K simultaneous accesses to 32K independent banks), then by employing the huge number of threads in the GPU (ditto an ASIC), the effective latency of the memory under parallelism (as opposed to the latency seen by each individual thread) drops until the memory bandwidth bound is reached.
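
To put rough numbers on that latency-hiding claim (a minimal sketch in the spirit of Little's Law; the latency, bandwidth, and access-size figures are placeholder assumptions, not measurements of any particular GPU):

Code:
# Sketch: effective memory latency under parallelism.
# With T concurrent threads each waiting `LATENCY_NS` per access, the aggregate
# request rate grows with T until it saturates the memory bandwidth bound.

LATENCY_NS     = 400        # placeholder per-access DRAM latency seen by one thread
ACCESS_BYTES   = 32         # placeholder bytes fetched per access
BANDWIDTH_GBPS = 300        # placeholder memory bandwidth in GB/s

def effective_latency_ns(threads):
    # Requests per second if per-thread latency were the only limit:
    latency_limited_rate = threads / (LATENCY_NS * 1e-9)
    # Requests per second the bandwidth bound allows:
    bandwidth_limited_rate = (BANDWIDTH_GBPS * 1e9) / ACCESS_BYTES
    achieved_rate = min(latency_limited_rate, bandwidth_limited_rate)
    # Effective latency per access across the whole device:
    return 1e9 / achieved_rate

for t in (1, 32, 1024, 32 * 1024):
    print(t, "threads ->", round(effective_latency_ns(t), 3), "ns per access (device-wide)")

With these placeholder figures the device-wide latency falls from 400 ns at 1 thread to about 0.1 ns at 32K threads, at which point the bandwidth bound, not the latency, is the limit.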

However, there may be an important distinction for electricity consumption between accesses that are coalesced and accesses that hit multiple memory banks simultaneously (and thus energize more than one row). Yet I think the DRAM's electricity consumption is always much less than the computation's, so as I said, unless the computation portion (e.g. the hash function employed) can be made insignificant, electricity consumption will be lower on the ASIC. Still waiting to see what you find out when you measure Cuckoo with a Kill-A-Watt meter.
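
The electricity comparison I have in mind is a per-iteration one, along these lines (every constant below is a placeholder to be replaced with measured values, e.g. from that Kill-A-Watt test; this is an assumption-laden sketch, not data):

Code:
# Sketch: share of per-iteration energy going to the hash versus DRAM traffic.
# All constants are placeholders; substitute measured numbers.

DRAM_PJ_PER_BYTE  = 20.0    # placeholder energy per byte of DRAM traffic (pJ)
HASH_PJ_PER_CALL  = 500.0   # placeholder energy per hash evaluation (pJ)
BYTES_PER_ACCESS  = 32      # placeholder bytes moved per memory access
ACCESSES_PER_HASH = 1       # placeholder memory accesses per hash in the PoW loop

dram_pj = DRAM_PJ_PER_BYTE * BYTES_PER_ACCESS * ACCESSES_PER_HASH
hash_pj = HASH_PJ_PER_CALL

print("DRAM energy per iteration:", dram_pj, "pJ")
print("Hash energy per iteration:", hash_pj, "pJ")
print("Computation share of total:", round(hash_pj / (dram_pj + hash_pj), 2))
# If the computation share dominates, an ASIC that hardens the hash saves most
# of the electricity even though the DRAM traffic stays the same.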

Why did you claim that memory latency is not very high on the GPU? Did you not see the references I cited? By not replying to my point on that, do you mean you agree with what I wrote about you confusing latency per sequential access with latency under parallelism?



Edit: I was conflating 'bank' with 'page'. I meant page, since I think I mentioned 4KB and it was also mentioned in the link I provided:

http://www.chipestimate.com/techtalk.php?d=2011-11-22

I hope I didn't make another error in this corrected statement. It is late and I am rushing.

Quote from that link:

DDR DRAM requires a delay of tRCD between activating a page in DRAM and the first access to that page. At a minimum, the controller should store enough transactions so that a new transaction entering the queue would issue it's activate command immediately and then be delayed by execution of previously accepted transactions by at least tRCD of the DRAM.

And note:

The size of a typical page is between 4K to 16K. In theory, this size is independent of the OS pages which are typically 4KB each.
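
The tRCD point can be pictured with a toy queue model (a sketch assuming a single page and fixed, purely illustrative timings; real controllers reorder and interleave far more aggressively):

Code:
# Sketch: overlapping tRCD (activate-to-first-access delay) with previously
# queued transactions. If the controller holds enough pending transactions,
# the activate for a new page is issued early and its tRCD wait is hidden
# behind the transfers still in flight. Timings are illustrative placeholders.

T_RCD_NS      = 14    # placeholder activate-to-first-access delay
T_TRANSFER_NS = 4     # placeholder data transfer time per queued transaction

def exposed_trcd_ns(pending_transactions):
    # Work still ahead of the new transaction when its activate is issued:
    work_ahead_ns = pending_transactions * T_TRANSFER_NS
    # The new transaction stalls only for whatever part of tRCD is not
    # already covered by that pending work.
    return max(0, T_RCD_NS - work_ahead_ns)

for pending in (0, 1, 2, 3, 4):
    print(pending, "pending ->", exposed_trcd_ns(pending), "ns of exposed tRCD")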

Thus again I was correct in what I wrote before: if the memory space is 128KB and the page size is 32KB, then the probability is not 2^-14. Sheesh.
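
To spell out the arithmetic (a minimal sketch assuming accesses are uniformly distributed over the memory space and the pages are equal-sized):

Code:
# Sketch: probability that random accesses land in the same page (or bank).

MEMORY_SPACE = 128 * 1024   # 128 KB memory space
PAGE_SIZE    = 32 * 1024    # 32 KB page

num_pages = MEMORY_SPACE // PAGE_SIZE   # = 4

# Probability a second independent access hits the same page as the first:
print(1 / num_pages)   # 0.25, i.e. roughly 1/4 -- nowhere near 2^-14

# The "more complex" computation: chance that at least two of k concurrent
# accesses collide in some page (birthday bound over the 4 pages).
def collision_probability(k, pages=num_pages):
    p_no_collision = 1.0
    for i in range(k):
        p_no_collision *= (pages - i) / pages
    return 1.0 - p_no_collision

print(collision_probability(2))  # 0.25
print(collision_probability(3))  # 0.625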