Post
Topic
Board Altcoin Discussion
Re: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin?
by
TPTB_need_war
on 07/02/2016, 20:36:26 UTC
Example, for a 128KB memory space with 32 KB memory banks

Hard to argue with someone who either confuses terms or whose numbers are way off.
You have at best a few hundred memory banks.

Quoting from http://www.futurechips.org/chip-design-for-all/what-every-programmer-should-know-about-the-memory-system.html

Banks

To reduce access latency, memory is split into multiple equal-sized units called banks. Most DRAM chips today have 8 to 16 banks.

...

A memory bank can only service one request at a time. Any other accesses to the same bank must wait for the previous access to complete, known as a bank-conflict. In contrast, memory access to different banks can proceed in parallel (known as bank-level parallelism).

Row-Buffer

Each DRAM bank has one row-buffer, a structure which provides access to the page which is open at the bank. Before a memory location can be read, the entire page containing that memory location is opened and read into the row buffer. The page stays in the row buffer until it is explicitly closed. If an access to the open page arrives at the bank, it can be serviced immediately from the row buffer within a single memory cycle. This scenario is called a row-buffer hit (typically less than ten processor cycles). However, if an access to another row arrives, the current row must be closed and the new row must be opened before the request can be serviced. This is called a row-buffer conflict. A row-buffer conflict incurs substantial delay in DRAM (typically 70+ processor cycles).
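The row-buffer behavior quoted above can be sketched with a toy model. This is only an illustration under the quoted assumptions (roughly 10 processor cycles for a row-buffer hit, 70+ for a row-buffer conflict); the function name and the exact cycle counts are hypothetical.

```python
# Toy model of a single DRAM bank with one row buffer.
# Assumed (hypothetical) costs taken from the quote above:
HIT_CYCLES = 10       # access to the currently open row
CONFLICT_CYCLES = 70  # close the open row, open a new one

def access_cost(addresses, row_size):
    """Return total cycles for a sequence of accesses to one bank."""
    open_row = None
    total = 0
    for addr in addresses:
        row = addr // row_size
        if row == open_row:
            total += HIT_CYCLES       # row-buffer hit
        else:
            total += CONFLICT_CYCLES  # row-buffer conflict (incl. cold start)
            open_row = row
    return total

# Sequential accesses stay inside one row; strided accesses touch a new row
# every time, so the same number of accesses costs far more.
print(access_cost(range(0, 64), row_size=64))         # 1 conflict + 63 hits = 700
print(access_cost(range(0, 64 * 64, 64), row_size=64))  # 64 conflicts = 4480
```

The point of the sketch is just that access cost is dominated by whether consecutive accesses land in the already-open row, not by how many banks exist.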

I have already explained to you that the page size is 4 KB to 16 KB according to one source, and I assumed (just for a hypothetical example) that it could perhaps be as high as 32 KB in a memory setup specially designed for an ASIC. I also stated that I don't know what the implications of making the size larger would be. I did use the word 'bank' instead of 'page', but I clarified for you in the prior post that I meant 'page' (see quote below), and that should have been evident from the link I had provided (in the post you are quoting above), which discussed memory pages as the unit relevant to latency (and which I guess you apparently didn't bother to read).

Thus, again, what I wrote before was correct: if the memory space is 128 KB and the page size is 32 KB, then the probability is not 2^-14. Sheesh.

Which number is way off? Even if we take the page size to be 4 KB, the probability is not going to be anywhere near your 2^-14 nonsense.

The number of memory banks is irrelevant to the probability of coalescing multiple accesses into one scheduled latency window. What is relevant is the ratio of the page size to the memory space (and the rate of accesses relative to the latency window). Duh!

I do hope you deduced that by 'memory space' I mean the size of the memory allocated to the random access data structure of the PoW algorithm.

The page size and the row-buffer size are equivalent. And the fact that only one page (row) per bank can be open at a time is irrelevant!

Now what is it that you are slobbering about?

(next time, before you decide you deserve to act like a pompous, condescending asshole, at least make sure your logic is correct)

I had PM'ed you to collaborate on PoW algorithms and alerted you to my new post, since in the past you've always been amicable and someone to collaborate productively with. I don't know wtf happened to your attitude lately. It seems that ever since I stated upthread some alternatives to your Cuckoo PoW, you've decided you need to hate on me. What is up with that? Did you really think you were going to get rich or win massive fame from a PoW algorithm? Geez, man, we have bigger issues to deal with. That is just one cog in the wheel. It isn't worth destroying friendships over. I thought of you as friendly, but not any more.