Board: Hardware
Re: [SCAM] Foxminers?
by delicopsch56 on 09/05/2017, 08:07:25 UTC
No one is even close to shipping production sub-14nm chips (but see the edit below; a big hmm there...).
Engineering samples of test circuits to begin characterizing what to expect from them, maybe. After the almost-there-for-a-couple-of-years hopes that 16/14nm raised (and even now in production those nodes are still turning out horrible yields), most industry pundits still put any 10nm production, even from Intel/IBM, at early next year. Even then they follow that with a "maybe."

Edit: Just did a search on Samsung and turned up https://arstechnica.com/gadgets/2017/02/samsungs-got-a-new-10nm-octa-core-chip-with-gigabit-lte-for-flagship-phones/
And from March, http://www.eetimes.com/document.asp?doc_id=1331504
https://www.xda-developers.com/intel-claims-next-chip-generation-ahead-samsung/

WTF? There has been NO mention of that kind of progress in the IEEE Spectrum feeds I get. Those cover beyond-bleeding-edge tech, and the last I read, 10nm was still in test mode, so this bears looking into...
What do you think now? When will the new miners be out with 10nm chips?

Kinda noobie question. I'm still on board with SCAM SCAM SCAM, but I haven't followed processor development for years and I claim no EE; I think I get the basics of the tough miniaturization advances on these chips at a high level. But general-purpose Intel CPUs change sockets about every fucking 15 minutes whenever they release a new chip/series/generation/whatever. Excluding (I'm guessing) slower signal travel, higher cost, higher power consumption, more heat, etc. from packing in more transistors, are there other reasons why they can't just release a slightly bigger chip with a new socket? They're dinky-ass chips already to this guy, who hasn't been in a serious airlocked, thumbprint-locked computer room in 15 years.

A near-miraculous paradigm shift in microcode is the only other thing I can guess, but I've looked at a bunch of crypto code, and to my eye the slow computation is already in hand-optimized assembler and already uses the on-chip crypto instructions. If anybody can help me understand whether those are, or are not, the general reasons for skepticism (beyond all the red flags in this miner's published documentation), it would help me understand the 'physics' comments much better.

Back in the day a workload was just CPU-bound or I/O-bound, and with many of the crypto functions baked into later Intel chips I have a hard time buying I/O-bound for sure. I get 'memory hard' on L3 cache with CryptoNote and such very well, but with Bitcoin and Litecoin I just can't quite get why you folks more knowledgeable about ASIC and/or CPU chip architecture can see this as a big scam so easily. Can somebody please explain the 'why' basics of the believed impossibility to me a bit better? Smiley  I've tried reading hardcore EE stuff but don't know enough to parse it.
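To check my own understanding (and so anyone can correct me), here's roughly what I think the Bitcoin mining inner loop boils down to, sketched in Python. The header bytes, the target, and the names here are made-up placeholders for illustration, not real miner code or real values:

Code:
import hashlib
import struct

def double_sha256(data: bytes) -> bytes:
    # Bitcoin hashes the 80-byte block header twice with SHA-256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, target: int, max_nonce: int = 2**32):
    # Grind nonces until the double SHA-256 of the header falls below the target.
    # The entire working set is the header plus SHA-256's internal state:
    # no big tables, no memory traffic, no I/O.
    for nonce in range(max_nonce):
        # The nonce occupies the last 4 bytes of the header, little-endian.
        header = header_prefix + struct.pack("<I", nonce)
        h = double_sha256(header)
        # The hash is compared to the target as a 256-bit integer.
        if int.from_bytes(h, "little") < target:
            return nonce
    return None

# Toy inputs so the sketch actually finishes: 76 zero bytes stand in for
# version, previous-block hash, merkle root, time, and bits.
fake_prefix = b"\x00" * 76
easy_target = 1 << 244   # absurdly easy target, found within a few thousand tries
print(mine(fake_prefix, easy_target))

If that's anywhere close to right, the whole working set is an 80-byte header plus the hash state, so it all lives in registers/L1 and there is nothing to be I/O- or memory-bound on. The only ways to hash faster are more SHA-256 pipelines on the die, higher clocks, or less power per hash, which is why people point at process node and chip design rather than some microcode or software trick. Somebody please correct me if I've got that wrong.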

Giassyass in advance.