Re: [SCAM] Foxminers?
Board: Hardware
by NotFuzzyWarm on 09/05/2017, 14:19:15 UTC
Excluding what I'm guessing are slower signal travel, higher cost, higher power consumption, more heat, etc. due to more transistors and so on, are there other reasons why they can't just release a slightly bigger chip with a new socket? They are dinky-ass chips already to this guy, who hasn't been in a serious airlocked, thumbprint-locked computer room in 15 years. A paradigm-shift, near-miracle advance in microcode is the only other thing I can guess, but I've looked at a bunch of crypto code that, to my eye, already has the slow computation in hand-optimized assembler and uses the on-chip crypto instructions. If anybody can help me understand why those are or aren't general reasons for skepticism, beyond all the published-documentation red flags on this miner, it would help me understand the 'physics' comments much better. Back in the day it was just CPU-bound or I/O-bound, and with many of the crypto functions on later Intel chips I have a hard time buying I/O-bound, for sure. I get 'memory hard' on L3 cache with CryptoNote and such very well, but with Bitcoin and Litecoin I just can't quite get why you folks more knowledgeable about ASIC and/or CPU chip architecture can see this as a big scam so easily. Can somebody please explain the 'why' basics of the believed impossibility to me a bit better? Smiley I've tried reading hardcore EE stuff but don't know enough to parse it.
Giassyass in advance.
That is a much harder question than you may think, as there are several layers to it... For one, no one else has anything approaching what they claim. The world of crypto ASIC design is very, very small, and frankly no one does anything without the others working along the same lines and being rather vocal about it. Since all we hear from the real makers about this Next Wonder Miner is crickets, well....

On the physical construction and layout of an ASIC I have to defer to a member here with the handle 2112, who is/was an instructor in chip design. You might want to PM them about it. A couple of papers on 16/14nm tech: http://www.techdesignforums.com/practice/guides/14nm-16nm-processes/ and, from 2013, http://www.eetimes.com/document.asp?doc_id=1280773 Somewhere I have several papers from Mentor Graphics on their toolchains for those nodes...

As far as 7nm goes, this is the last I have come across: https://arstechnica.co.uk/gadgets/2015/07/ibm-unveils-industrys-first-7nm-chip-moving-beyond-silicon/ Do note that 'chip' there refers to an assemblage of functional test structures, not a usable logic chip.

To me the core issue is: how many cores will fit in a chip? Unlike CPUs, crypto ASICs are extremely simple beasts. Each has serial comms, a smattering of working memory, and a buttload of hard-wired SHA logic cores. A CPU, by contrast, contains many different circuits for several kinds of I/O, along with cache and math units, together with the actual few-to-a-handful of CPU cores. The latest Intel Xeon has what, 12 physical cores in it? As I recall, Bitmain's BM1387 chip used in the R4, S9 and T9 has 250 cores in it, and the S9 uses 189 of those chips. In a way GPU chips are similar (high core count), but rather like FPGAs their operations can be changed via programming, and that flexibility again brings speed and power penalties. A sketch of what each of those hard-wired cores actually does follows below.
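To make "hard-wired SHA logic core" concrete, here is a minimal sketch in C of the job every one of those cores does: grind through nonces, double-SHA256 the 80-byte block header, and check the result against a target. This is just an illustration using OpenSSL's SHA256() and a toy difficulty check; a real core has all the SHA rounds flattened into fixed pipeline stages rather than a function call, and the S9's roughly 189 x 250 = ~47,000 such cores do nothing but this, in parallel, in fixed silicon.

Code:
#include <openssl/sha.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy sketch of one mining core's job. A real ASIC core is this loop
 * unrolled into hard-wired pipeline stages, not software. */
int main(void)
{
    /* 80-byte header: version, prev hash, merkle root, time, bits, nonce.
     * Zeroed here; a real miner gets these fields from the pool. */
    uint8_t header[80] = {0};
    uint8_t hash1[32], hash2[32];

    for (uint32_t nonce = 0; nonce < 1000000; nonce++) {
        /* Nonce occupies the last 4 header bytes, little-endian
         * (assumes a little-endian host for this sketch). */
        memcpy(header + 76, &nonce, 4);
        SHA256(header, sizeof header, hash1);  /* first SHA-256 pass  */
        SHA256(hash1, sizeof hash1, hash2);    /* Bitcoin is SHA256d  */

        /* Toy target check: the hash is read as a little-endian number,
         * so "leading zeros" are zero bytes at the end of the buffer. */
        if (hash2[31] == 0 && hash2[30] == 0 && hash2[29] == 0) {
            printf("share found at nonce %u\n", nonce);
            return 0;
        }
    }
    puts("no share in this nonce range");
    return 0;
}

That loop is the entire "algorithm"; there is no clever microcode left to discover, which is why the Foxminers claims ring hollow.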

Since Bitmain has not released a data sheet for that chip, here are the specs for their last 28nm chip, used in the S7 miner, to poke through: https://shop.bitmain.com/files/download/BM1385_Datasheet_v2.0.pdf Much of it should still apply to their current 16nm chip.

That simplicity does have a downside: power density. Miner design moved away from large monolithic chips because it is very difficult to power and cool a chip which, size-wise, *could* these days hold several thousand cores and dissipate >1,000 watts. The BFL Monarch, Hashfast Minion, and a few other failures come to mind...
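A quick back-of-envelope sketch of that power-density problem, with made-up but plausible numbers (the die sizes and wattages here are illustrative, not any real chip's specs):

Code:
#include <stdio.h>

/* Illustrative power-density arithmetic; all figures are hypothetical. */
int main(void)
{
    double monolithic_watts = 1000.0;  /* big monolithic miner die        */
    double monolithic_mm2   = 600.0;   /* near reticle-limit die area     */

    double small_chip_watts = 7.0;     /* small S9-class ASIC             */
    double small_chip_mm2   = 20.0;

    printf("monolithic: %.2f W/mm^2\n", monolithic_watts / monolithic_mm2);
    printf("small chip: %.2f W/mm^2\n", small_chip_watts / small_chip_mm2);
    return 0;
}

Same ballpark of total wattage, but spread across ~190 small dies, each with its own slice of heatsink, it is manageable; concentrated in one die it cooks itself, which is roughly what killed the monolithic designs.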

On the software end it gets fuzzier. Two things come into play there: Stratum, which works with the pools to create work, and the miner software itself. Stratum docs: https://bitcointalk.org/index.php?topic=557866.0
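For flavor, the opening of a Stratum session is just newline-delimited JSON-RPC over TCP. A minimal sketch of the client's first messages, per the docs linked above (the worker name and password are placeholders):

Code:
#include <stdio.h>

/* Sketch of a Stratum client's opening messages: plain newline-delimited
 * JSON-RPC. Worker credentials here are placeholders. */
int main(void)
{
    /* Subscribe for work notifications and extranonce details. */
    printf("{\"id\": 1, \"method\": \"mining.subscribe\", \"params\": []}\n");

    /* Authorize a worker; the pool then pushes mining.set_difficulty
     * and mining.notify jobs. */
    printf("{\"id\": 2, \"method\": \"mining.authorize\","
           " \"params\": [\"worker\", \"password\"]}\n");

    /* Found shares go back upstream as mining.submit messages carrying
     * job id, extranonce2, ntime and nonce. */
    return 0;
}

Nothing in that protocol is a bottleneck a miracle miner could exploit; the pool hands out work, the hardware grinds, and shares trickle back.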

For the miner software, talk to -ck, since he wrote cgminer, which is what almost all miners use. Considering even the latest miners can run on a single RasPi-3 front-end, that rather says then and there that optimizing the code (even more so, as it *is* rather mature) will have little impact.

As for the Samsung 10nm processor: read into the link and the one from Intel. To claim the first-to-market moniker, Samsung is kind of a cheater: yes, the gates are around 10nm, but the metallization (the connections) is the same 22nm they use with their 14nm chips. Intel tends to shoot for doing the entire process smaller, not just the gates.

*Could* Bitmain be working with TSMC to make a 12nm mining ASIC (with the same 22nm metal layers they use in their 16nm FinFETs)? Sure. When it comes to boutique chips like mining ASICs, Bitmain is the one company that certainly has the resources to pay for it. But it makes no economic sense to me for them to do it: their BM1387 is king of the hill with no competitors, and after the beating the entire industry took finally getting 16/14nm to the consumer market, coupled with the still-erratic chip-to-chip performance headaches they (and Avalon, and BitFury) have to deal with, there is just no point in doing it at this time. Even if they did, there is still the problem of boutique chips being last in line for foundry production priorities. Just as is still the case with 16nm chips, first come the folks who financed ALL of the research involved, e.g. Apple, AMD, Cisco, Broadcom, et al.