Re: [ActiveMining] The Official Active Mining Discussion Thread [Self-Moderated]
Board: Securities
by kleeck on 30/01/2014, 15:45:13 UTC
Not to be dismissive, but the chances of that happening are almost non-existent. Even if the chips were coming out of fab right now and there were a vetted board design, I would be surprised if AMC could get a 1 PH mine running by the end of Q2. You're talking about a 2 MW datacenter there, probably 3 MW with cooling. That's going to be a massive facility, and even the capital infrastructure outside the mining hardware doesn't spring up overnight.
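For anyone who wants to sanity-check the 2-3 MW figure, here's the arithmetic as a quick Python sketch. The 2 W per GH/s efficiency is my assumption (chosen to be consistent with the 2 MW number above), and the 1.5 PUE for cooling overhead is a rule of thumb, not a figure anyone has published for this build:

Code:
# Back-of-the-envelope power estimate for a 1 PH/s mine.
# ASSUMPTIONS: ~2 W per GH/s for the 55nm design, PUE of 1.5 for cooling.
target_ghs = 1_000_000        # 1 PH/s expressed in GH/s
watts_per_ghs = 2.0           # assumed chip efficiency
pue = 1.5                     # power usage effectiveness (cooling overhead)

it_load_mw = target_ghs * watts_per_ghs / 1_000_000   # 2.0 MW of mining load
total_mw = it_load_mw * pue                           # 3.0 MW at the meter

print(f"Mining load: {it_load_mw:.1f} MW, with cooling: {total_mw:.1f} MW")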
Just look at how long it took the 100 GH/s mine to ramp up to 500 GH/s, and that's with the Bitfury design, which pulls about half the power per GH/s that this 55nm design will if it hits its specs. They started getting hardware online at the end of July/start of August and hit 500 GH/s at the end of the year.
At 16 GH/s per chip, ramping up production requires roughly one-fifth the hardware needed with Bitfury chips, which means it should be possible to ramp up about 5x faster.
In order to have this 1-2 PH farm within Q2, we would need to use the 55nm chips @ 1.9 GH/s.
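To put those hardware counts side by side, here's a quick sketch; the ~3 GH/s per Bitfury chip is my assumption, while the 16 GH/s and 1.9 GH/s figures come from the posts above:

Code:
# Chips needed to reach 1 PH/s at each per-chip hashrate.
# The ~3 GH/s Bitfury figure is an assumption; 16 and 1.9 GH/s are from the thread.
target_ghs = 1_000_000  # 1 PH/s

for label, ghs_per_chip in [("next-gen @ 16 GH/s", 16.0),
                            ("Bitfury @ ~3 GH/s", 3.0),
                            ("55nm @ 1.9 GH/s", 1.9)]:
    print(f"{label}: ~{target_ghs / ghs_per_chip:,.0f} chips")

# next-gen: ~62,500; Bitfury: ~333,333; 55nm: ~526,316.
# 16 GH/s vs ~3 GH/s is roughly one-fifth the chips, hence "ramp up ~5x faster".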
Given your job, Kleeck, what is the cost of setting up a 3,000 kW data center, running baseloaded power (24/7), by June? It certainly is not going to fit in the "garage" building that is headquarters.


I'm unsure of the cost, as that's not my side of the responsibility, but no, it absolutely will not fit in a garage. :) When I say DC, I mean a proper data center, and a garage does not qualify. Ken is not going to build this DC from the ground up by June; let's not be ridiculous. There are many, many enterprise-grade DCs that will rent you rack space and power. I'd wager that Ken is already in talks with a couple.
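Setup cost aside, the recurring power bill for a 3,000 kW baseload is easy to ballpark; the $0.07/kWh industrial rate below is purely an illustrative assumption on my part, not a figure from anyone involved:

Code:
# Monthly energy bill for a 3,000 kW load running 24/7 (baseloaded).
# ASSUMPTION: $0.07/kWh industrial rate -- an illustrative figure only.
load_kw = 3_000
hours_per_month = 24 * 30            # ~720 hours
rate_usd_per_kwh = 0.07              # assumed rate

monthly_kwh = load_kw * hours_per_month          # 2,160,000 kWh
monthly_cost = monthly_kwh * rate_usd_per_kwh    # ~$151,200

print(f"{monthly_kwh:,} kWh/month -> ~${monthly_cost:,.0f}/month in power alone")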

Edit: To lend a little more insight into why a mining DC isn't that big of a deal, I'll share this. One of our computer scientists used to work for a major university. They rented rack space in the same DC we use for a massive GPU farm that handled all manner of medical-science computations. The machines ran 24/7, multiple GPUs per server, crunching away relentlessly, and the DC didn't bat an eye at their power/cooling requirements. This type of thing has been going on since before Bitcoin.