Showing 20 of 62 results by someone42
Post
Topic
Board Development & Technical Discussion
Re: Q:Is there a deterministic private-public keypair generator w/o the BIP32 issue?
by
someone42
on 30/07/2015, 08:17:18 UTC

HD wallets have a flaw whereby revealing a child private key together with its parent master public key will reveal the parent master private key. (Described here[1] and here[2].)

Isn't there a similar concept with a master public key and master private key that does not suffer from this issue? (It does not have to be ECDSA. I just want a deterministic private-public keypair generator that can publish its master public key.)
Depending on what your use case is, this might be useful: https://bitcointalk.org/index.php?topic=916441.0
The scheme described by Gus Gutoski will protect a parent private key from the release of a specific number of its child private keys.
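
For reference, the flaw in question comes from the linearity of non-hardened BIP32 derivation. Here is a minimal sketch of the algebra in Python; the private key, chain code and "serialised parent public key" below are placeholders (real BIP32 uses the 33-byte compressed EC point), but the arithmetic is the same.
Code:
import hmac, hashlib

N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141  # secp256k1 group order

parent_priv = 0x0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF0123456789ABCDEF  # hypothetical
chain_code = bytes.fromhex("aa" * 32)             # hypothetical
parent_pub_ser = bytes.fromhex("02" + "bb" * 32)  # placeholder for the compressed parent pubkey
index = 0                                         # a non-hardened child index

def il(chain_code, pub_ser, index):
    # I_L from BIP32: left 32 bytes of HMAC-SHA512(chain code, pubkey || index)
    data = pub_ser + index.to_bytes(4, "big")
    return int.from_bytes(hmac.new(chain_code, data, hashlib.sha512).digest()[:32], "big")

# Wallet side: a non-hardened child private key is just an offset of the parent key.
child_priv = (parent_priv + il(chain_code, parent_pub_ser, index)) % N

# Attacker side: the extended *public* key (pubkey + chain code) reveals the same
# offset, so one leaked child private key hands over the parent private key.
recovered = (child_priv - il(chain_code, parent_pub_ser, index)) % N
assert recovered == parent_priv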
Post
Topic
Board Development & Technical Discussion
Re: Numerically finding an optimal block size using decentralisation-utility
by
someone42
on 05/07/2015, 06:06:17 UTC

This is not true.  On average every node has to receive each transaction and block only once.  This means that on average each node has to send each transaction once.  Some (fast) nodes may end up sending transactions lots of times.  The closer your node is to miners, the more likely it is to have to send blocks multiple times.

Thank you for helping me find a better value for the bandwidth multiplication factor. My initial guesstimate of 10 was really handwavy and I agree with your methodology. But I do like to use real-world data, so:
  • Number of peers: 20 seems like a good minimum value for a healthy Bitcoin network - 8 for outbound connections, 8 for inbound connections, and a few for crawlers/leechers.
  • tx size: the average tx size of the last 20160 blocks, beginning from block 363900, was 560 bytes.
  • inv messages: My sniffer logged 13743 packets with 18374 inv messages. Including the various protocol overheads, the result was 98 bytes per inv.
  • tx messages: Overhead (as determined from the sniffer) was about 90 bytes per tx message, so an average of 650 bytes per tx.
  • block messages: I'll assume overhead is negligible here, since blocks are bulk data. Thus block messages contribute an average of 560 bytes per tx.
The result is (20 * 98 + 2 * 650 + 2 * 560) / (560) = 7.8
I shall update my original post/graphs with this factor.
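
As a sanity check, the factor works out like this (a minimal sketch using the measured figures above; the factor of 2 stands for one receive plus one send of each tx/block, on average):
Code:
peers          = 20    # inv messages exchanged per tx (one per peer)
inv_bytes      = 98    # bytes per inv, including protocol overhead
tx_msg_bytes   = 650   # bytes per tx message (560-byte tx + ~90 bytes overhead)
block_tx_bytes = 560   # block data attributable to one tx
transfers      = 2     # each tx/block received once and sent once, on average

total = peers * inv_bytes + transfers * tx_msg_bytes + transfers * block_tx_bytes
factor = total / 560   # normalise by the raw tx size
print(round(factor, 1))  # ~7.8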

I'm including overhead because ISPs do count overhead towards caps.

Interesting that nearly half of bandwidth attributable to a single tx is due to inv messages. Looks like there's room for optimisation there.
Post
Topic
Board Development & Technical Discussion
Merits 1 from 1 user
Topic OP
Numerically finding an optimal block size using decentralisation-utility
by
someone42
on 04/07/2015, 09:28:39 UTC
⭐ Merited by ABCbits (1)
Finding an optimal block size is about tradeoffs - too big, and you have centralisation, too small, and Bitcoin cannot support as many users. How can this be quantified? Here is a simple and general figure-of-merit, "decentralisation-utility":

decentralisation-utility = potential number of full nodes * potential number of users

Decentralisation is represented by the potential number of full nodes. Utility is represented by the potential number of users. The question of finding an optimal block size is then: how can decentralisation-utility be maximised? For example, it's better to reduce potential number of users by 20%, if that would mean 30% more potential full nodes.

Potential number of full nodes

For most residential full node operators, the limiting factors will probably be data caps and upload speed. It's difficult to get combined statistics on this, but I did find one excellent data set. The FCC's Measuring Broadband America Report (see https://www.fcc.gov/reports/measuring-broadband-america-2014) contains a wealth of data about broadband Internet connections in the US. Commendably, they have also made the raw data available from https://www.fcc.gov/measuring-broadband-america/2014/raw-data-fixed-2013. The data is supposedly representative of the general consumer population.

The raw data contains cross-referenced upload speeds and total usage (downloaded and uploaded bytes). I will use total usage to represent "proven" capacity - if a user is able to transfer 20 GB/month of any data, this demonstrates that they are capable of transferring 20 GB/month of Bitcoin block data, and that 20 GB/month fits within their data cap. Furthermore, I calculated the maximum amount that could be transferred at their upload speed, and if this was lower than their total usage, then that becomes their proven capacity.
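
For anyone wanting to reproduce this, here is a rough sketch of the proven-capacity calculation; the pre-joined CSV and its column names (upload_kbps, bytes_down, bytes_up) are hypothetical, since the actual FCC release is split across several tables.
Code:
import numpy as np
import pandas as pd

df = pd.read_csv("fcc_2013_joined.csv")  # hypothetical pre-joined file, one row per panelist

total_usage_gb = (df["bytes_down"] + df["bytes_up"]) / 1e9
# Cap "proven" capacity at what the upload link could physically move in a month.
upload_limit_gb = df["upload_kbps"] * 1000 / 8 * 30 * 86400 / 1e9
proven_gb = np.minimum(total_usage_gb, upload_limit_gb)

def fraction_capable(required_gb_per_month):
    # Proportion of users whose proven capacity covers the requirement.
    return float((proven_gb >= required_gb_per_month).mean())

print(fraction_capable(10), fraction_capable(100))  # the post reports ~0.98 and ~0.67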

Here are the results:

This graph represents the proportion of users who are capable of transferring the amount on the x-axis. For example, if we set block size limits so that Bitcoin needs 10 GB/month, then 98% of users can run full nodes. But if we set block size limits so that Bitcoin needs 100 GB/month, then only 67% of users can run full nodes.

I'll assume that there are 10,000# people out there wishing to run full nodes. As block size is increased, an increasing proportion of those people will be unable to run full nodes due to lack of data capacity.

Potential number of users

I'll assume that each "user" of Bitcoin requires 1000# bytes of on-chain space each day. This corresponds to 2-4 transactions per day, per user. Each MB of block size then supports 144,000 Bitcoin users. Layer-2 networks like Lightning will increase the number of potential users, as users will (on average) require less on-chain space.

Combining these

There is still one more question: how does block size (in MB) influence data usage (in MB/month)? I'll assume completely full blocks every 10 minutes, and that block data needs to be transferred a total of 10 times. This is reasonable, as Bitcoin is a P2P gossip network and transactions/blocks may need to be retransmitted to peers. Transferring transactions/blocks a total of 10 times means that a full node can relay to at least 9 peers[1] and that results in fast enough propagation (< 5 hops for 10,000 nodes). So for example, 1 MB blocks would require 44.6 GB/month of data capacity (1 MB * 4464 blocks/month * 10). Edit: see https://bitcointalk.org/index.php?topic=1108530.msg11793460#msg11793460, the bandwidth multiplication factor is probably closer to 7.8.
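
Putting the pieces together, here is a sketch of the whole optimisation; fraction_capable() is the function from the sketch above (or any other estimate of the share of users whose capacity covers a given GB/month requirement), and the factor of 10 is the one assumed in this paragraph.
Code:
BLOCKS_PER_MONTH = 4464
FACTOR = 10                # bandwidth multiplication factor (7.8 after the edit above)
POTENTIAL_NODES = 10_000   # "#" parameter: only rescales the y-axis
BYTES_PER_USER_DAY = 1000  # "#" parameter: 2-4 transactions per user per day

def decentralisation_utility(block_mb):
    gb_per_month = block_mb * BLOCKS_PER_MONTH * FACTOR / 1000
    full_nodes = POTENTIAL_NODES * fraction_capable(gb_per_month)  # from the sketch above
    users = block_mb * 1e6 * 144 / BYTES_PER_USER_DAY              # 144 blocks per day
    return full_nodes * users

best = max((x / 10 for x in range(1, 201)), key=decentralisation_utility)
print(best)  # the post finds an optimum around 3.8 MB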

Combining all that, here is what decentralisation-utility looks like:


The optimal block size is about 3.8 MB.

Some issues with this analysis:
  • The dataset is for the US. I was unable to find a good enough dataset for anywhere else.
  • The dataset is from September 2013. Bandwidth capacity and usage has grown since then, and will continue to grow.
  • Proven capacity (i.e. usage) will underestimate actual maximum capacity. This figure of 3.8 MB is quite conservative.
  • The optimal block size is inversely proportional to the bandwidth multiplication factor. For example, assuming that the factor is 2 instead of 7.8 leads to the conclusion that the optimal block size is 15 MB. A bandwidth multiplication factor of 2 is the theoretical minimum that could be obtained via improvements to the Bitcoin P2P protocol.

tl;dr: Trading off decentralisation and number of users in the US broadband environment results in an optimal block size of 3.8 MB. With P2P protocol improvements this could be up to 15 MB.

[1] I'm assuming a future where IBLT is part of the Bitcoin P2P protocol, so that transaction data doesn't need to be sent "twice". I'm also assuming that most non-relaying SPV nodes use bloom filtering, so that not every transaction needs to be relayed to them.
#All parameters marked with "#" are irrelevant to the optimisation. It turns out that tweaking these parameters ends up rescaling the y-axis of the decentralisation-utility graph (which we don't care about), but will leave the x-axis untouched (we do care about that as it has block size in it).
Post
Topic
Board Development & Technical Discussion
Re: Elastic block cap with rollover penalties
by
someone42
on 12/06/2015, 12:37:55 UTC
Just to make sure everything is clear, the penalty pool is not related to mining pools, despite the usage of the word "pool". I have not talked about mining pools in this discussion.

Sorry my mistake.  I was thinking miners could leave mining pools in this scenario, to benefit from the remaining members of the pool producing larger blocks.
I decided to analyse the situation in the case of mining pools. As a start, I used Meni's parameters as described in https://bitcointalk.org/index.php?topic=1078521.msg11557115#msg11557115.

Assume I have 1% of the total hash rate. I am currently in a 90% mining pool. There are also 10 1% miners. Using the results in Meni's post, my expected income per block is 0.04685 BTC.

Will I earn more by leaving the 90% mining pool and solo-mining?

If I solo-mine, the 90% mining pool is now an 89% mining pool and there are 11 1% miners. Solving for these parameters:
n0 = 7217 (slightly smaller)
n1 = 5945 (almost the same)
p = 0.6906 mBTC (slightly larger)
Penalty paid by 1% miners: f(5945) = 0.4602 BTC
Penalty paid by 89% miner: f(7217) = 3.3043 BTC
Average penalty: 0.89*3.3043 + 0.11*0.4602 = 2.9914 BTC
Reward per block for 1% miner: 5945 * 0.0006906 + 2.9914 - 0.4602 = 6.6368 BTC
Reward per block for 89% miner: 7217 * 0.0006906 + 2.9914 - 3.3043 = 4.6712 BTC

My expected income per block is now 0.06637 BTC, 42% higher than when I was in a mining pool. There is a very strong incentive to betray the large mining pool. I have not done any additional calculations, but I suspect it is also profitable for every individual miner in the larger mining pool to leave and solo-mine, or at least join a smaller pool.
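
The arithmetic can be checked with a few lines, using only the numbers quoted in this post (Meni's penalty function f() itself is defined in the linked thread):
Code:
p = 0.0006906            # 0.6906 mBTC, as quoted above
n_small, n_big = 5945, 7217
pen_small, pen_big = 0.4602, 3.3043

avg_pen = 0.89 * pen_big + 0.11 * pen_small       # average penalty paid into the rollover pool
reward_small = n_small * p + avg_pen - pen_small  # ~6.6368 BTC when a 1% miner finds a block
income_solo = 0.01 * reward_small                 # a 1% miner finds 1% of blocks
income_pooled = 0.04685                           # expected income per block inside the 90% pool
print(income_solo, income_solo / income_pooled - 1)  # ~0.0664 BTC, ~42% higher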

Thus a sensible large mining pool operator should not mine supersized blocks. Meni, I realise you've come to this conclusion another way (higher income means higher difficulty), but this is yet another reason why rollover penalties discourage large mining pools from mining large blocks.
Post
Topic
Board Development & Technical Discussion
Re: Elastic block cap with rollover penalties
by
someone42
on 10/06/2015, 06:36:30 UTC
1. Floating block limits have their own set of problems, and may result in a scenario where there is no effective limit at all.

2. Even if you do place a floating block limit, it doesn't really relieve of the need for an elastic system. Whatever the block limit is, tx demand can approach it and then we have a crash landing scenario. We need a system that gracefully degrades when approaching the limit, whatever it is.
Why not have both an elastic block cap and floating block limits? A common argument against floating block limits is "big miners will create super-sized blocks full of crap to artificially inflate the block limit". This attack is conveniently addressed by your rollover penalties, as the penalty is a function of block size (not of transaction fees), so miners cannot game the system by including self-mined transactions.

Consider the following floating block limit function:
Code:
T = k * median(block size of last N blocks)
evaluated every N blocks.
With k = 1.0 and N = something large like 8064, we have an equilibrium situation consisting of the status quo, where everyone stays under the soft cap of T. If a large mining cartel wishes to inflate block sizes against the will of smaller miners, they must begin creating larger blocks and paying penalties towards the smaller miners, for each block. With sufficiently large N, this is not sustainable.
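
A minimal sketch of this update rule, with k and N as in the example above:
Code:
from statistics import median

K = 1.0    # k = 1.0 preserves the status quo; k < 1.0 makes the status quo cost something
N = 8064   # re-evaluate every N blocks (~8 weeks at 144 blocks per day)

def next_soft_cap(recent_block_sizes):
    # New penalty threshold T, computed once every N blocks.
    assert len(recent_block_sizes) == N
    return K * median(recent_block_sizes)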

Let's say Bitcoin experiences sustained, long-term growth and the fee pool/demand increases. Now everyone, including smaller miners, is creating blocks larger than T. Everyone pays penalties, but in doing so, the "penalty" isn't really a penalty; everyone receives about as much from the rollover pool as they pay. After 8064 blocks, T increases to account for this genuine, long-term increase in demand.

There are lots of parameters here, and they can be adjusted to disincentivize a mining cartel from artificially inflating block sizes. For example, increasing N and making the limit function f(x) "harder" will both increase the cost to artificially inflate block size limits. Another possibility is setting k < 1.0 (e.g. k = 0.98), which means that maintaining the status quo has a cost. If miners unanimously decide to maintain the status quo, then no-one is actually penalised because everyone receives as much from the penalty pool as they pay. However, if smaller miners feel that the status quo is unreasonable (because of past bloat from a large mining cartel), they can choose to make smaller blocks and "bleed" penalties from the larger miners. However, I am concerned that setting k < 1.0 might implicitly set an absolute minimum transaction fee.

tl;dr version:
With floating block limits + rollover penalties:
mining cartel tries to artificially inflate blocks => they must subsidize smaller miners with penalties
Bitcoin experiences genuine, long-term growth => miners unanimously include more transactions => block sizes will increase
Post
Topic
Board Hardware wallets
Re: [ESHOP launched] Trezor: Bitcoin hardware wallet
by
someone42
on 03/10/2014, 11:01:30 UTC
Keys stored in the bootloader are public as asymmetric cryptography is used there. There is no security reason why bootloader should stay closed, but we were quite hesitant to open it because that's the last piece of mosaic that our competition is missing from making a perfect TREZOR clone.
Anyone who wants to clone your code can just upload (unsigned) firmware that dumps the bootloader. The STM32F2xx's level 2 code protection does not prevent flash from being read by code that is running within the microcontroller. No, this won't get you the source, but if you're going to be making a 1:1 copy, you don't need source.
Post
Topic
Board Hardware
Re: What ever happened to Butterfly Labs' BitSafe Wallet ?
by
someone42
on 09/10/2013, 04:49:30 UTC
The BitSafe hardware wallet is an open-source project. It did not originate from BFL (see https://bitcointalk.org/index.php?topic=152517.0 for the hardware side and https://bitcointalk.org/index.php?topic=78614.0 for the firmware side), though BFL are the ones commercialising it. Like many open-source projects, work was done by allten and me in our spare time, on a purely voluntary basis. Since we have other things going on in our lives, sometimes that means that development stalls. Currently, development has stalled. I expect things to pick up in the coming months as I have more free time.

[who the fuck cares]
You may not find a hardware wallet useful. You are probably competent at computer security. A hardware wallet will not offer you much additional security. But what about "ordinary people", who may not be as well-versed in keeping their computers secure? How are they supposed to store and use their BTC? Hardware wallets like the BitSafe and Trezor aim to improve wallet security while remaining usable by ordinary people.
Post
Topic
Board Development & Technical Discussion
Re: Entropy source for smartphone or HW wallet
by
someone42
on 23/09/2013, 13:53:03 UTC
Can we use the accelerometer in smartphone as entropy source (or adding it to HW wallet, costs only about 1 USD)? When generating a new address, or signing, the user is asked to shake the device for a few seconds. That should give plenty of randomness

For HW wallets, there are faster (in raw bits/s) ways to collect entropy for < 1 USD in parts and with no user interaction required. I describe some of them here: https://bitcointalk.org/index.php?topic=127587.msg1434009#msg1434009, but I am sure there are many more ways. These also have the benefit of being internal, so it is more difficult for an observer to guess the state of your entropy pool.
Post
Topic
Board Hardware
Re: [Work in progess] Burnins Avalon Chip to mining board service
by
someone42
on 23/09/2013, 13:39:34 UTC
To those who are getting "Idled 1 miners"-style messages in cgminer, it might be worth checking if you're hitting the cgminer overtemperature cutoff. I don't know how it is where you live, but where I am, it has been getting warmer. As a result, sometimes my Bitburners would hit the default cutoff of 60 degrees and then stop. The overtemperature cutoff is especially suspect if your miners stop once a day (since ambient temperature has a strong daily fluctuation).

The relevant cgminer option is "--avalon-cutoff", e.g. "--avalon-cutoff 65" to raise it to 65 degrees. If ambient temperatures are increasing, I would also overclock a bit less, since higher temperatures seem to correlate with higher HW error rates.
Post
Topic
Board Group buys
Re: [Group Buy#1] Avalon ASICs CHIPS! Using JohnK as escrow! FINISHED!
by
someone42
on 18/09/2013, 09:51:39 UTC
Well, it looks like it's finally over. This morning, Avalon did a bunch of refunds. This group buy was one of them: https://blockchain.info/tx/730e45ef0d59847973bb9e80e6e05c014787e5c91b01094647a39d7a5ab76199.

(1JrwWrt3TYUzMYFEBLX5hTo1zFsEY6tWZN is an Avalon-controlled address. 1GxGpQrAS345PvYJaW4YANFiJQuRurLVjL is John (John K.)'s refund address for this group buy, as indicated in https://bitcointalk.org/index.php?topic=141672.msg3111564#msg3111564.)
Post
Topic
Board Hardware
Re: Klondike - 16 chip ASIC Open Source Board - Preliminary
by
someone42
on 08/07/2013, 14:03:13 UTC
1000 results:
374/364/175/64/19/1/2/1/0/0
(which is 1007 nonces = close to expected average of 1000)
I'm surprised so many have 0. If only we could test for a "dead" work unit and discard them. Tongue
Then you could increase all hashing performance by a massive 37% (based on that result) Cheesy

... and yet this is something I looked into a long time ago (almost 2 years) early on when I first found out about bitcoin, but never completed my work on it ...
By the looks of those results I should get back to it one day and finish it ... but I doubt I'll bother since it probably won't yield anything Tongue
It started as a program to optimise hashing (and found all the GPU optimisations independently)
Those results look very close to the theoretical values for the relevant Poisson distribution (lambda = 1). The first 11 theoretical values are: {367.88, 367.88, 183.94, 61.313, 15.328, 3.0657, 0.51094, 0.072992, 0.009124, 0.001014, 0.000101}. This makes sense, since mining is a Poisson process.
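
Those theoretical counts can be reproduced directly from the Poisson pmf at lambda = 1, scaled to 1000 work units (each 2^32 nonce range yields roughly one valid nonce on average):
Code:
from math import exp, factorial

expected = [1000 * exp(-1) / factorial(k) for k in range(11)]
print([round(x, 4) for x in expected])
# [367.8794, 367.8794, 183.9397, 61.3132, 15.3283, 3.0657, 0.5109, 0.073, 0.0091, 0.001, 0.0001]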

Something I learned: getting multiple nonces within one work unit occurs about 2/3 as often as getting a single nonce. So you should be prepared to handle multiple nonces.
Post
Topic
Board Altcoin Discussion
Re: [XPM] [ANN] Primecoin Prerelease Announcement - Introducing Prime Proof-of-Work
by
someone42
on 03/07/2013, 16:53:04 UTC
for a POW algorithm to be useful for blockchain verification it must be

 - hard to derive (for transaction verifiers)
 - controllable difficulty (so as more nodes are added, the difficulty can rise)
 - easy to prove (for relaying nodes)

hash algorithms are good here.  An algorithm with primes sounds like it would be based around the factorising problem (e.g. as used in RSA) - but the question is how Sunny has designed it to be variable - perhaps the difficulty is set by the length of required prime in bits, and the POW is two primes and a factor that meet the difficulty.  This would be very very ASICable compared with scrypt, but I don't think any off the shelf ASIC cores would exist (unlike with SHA256)

Interested to see what Sunny has come up with here.

Will

Here is something which might work. It is based on Pratt certificates (see http://en.wikipedia.org/wiki/Pratt_certificate).

Mining process
The miner attempts to find a large prime n which has the following properties:
  • The most significant 256 bits are equal to the merkle root
  • The prime is large enough to meet the difficulty target
The miner can do this by trying random large integers (the least significant bits are the "nonce") and running many iterations of the Miller-Rabin test. With enough Miller-Rabin iterations, the miner can be quite confident that they actually have a prime.
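
A rough sketch of that mining loop (this is only the scheme proposed in this post, not necessarily what Sunny has implemented; the Miller-Rabin test is written out so the sketch is self-contained):
Code:
import random

def is_probable_prime(n, rounds=40):
    # Standard Miller-Rabin probable-prime test.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def mine(merkle_root, difficulty_bits):
    # Try random low bits (the "nonce") until the whole number is probably prime.
    # Assumes merkle_root is a 256-bit integer and difficulty_bits > 256.
    # Example: mine(random.getrandbits(256) | (1 << 255), 320)
    nonce_bits = difficulty_bits - 256
    while True:
        candidate = (merkle_root << nonce_bits) | random.getrandbits(nonce_bits) | 1
        if is_probable_prime(candidate):
            return candidate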

Proof of work
To generate the proof of work, the miner generates a Pratt certificate for their large prime n. Generation of a Pratt certificate is very hard; it requires the factorisation of n - 1, which requires exponential time in the size of n. Yet it is easy to verify a Pratt certificate; verification is polynomial time in the size of n. For example, factorisation of a 1024 bit integer is about 7 million times as difficult as a 512 bit integer (according to http://en.wikipedia.org/wiki/General_number_field_sieve), yet it is only 16 times as difficult to verify.
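
For illustration, here is a sketch of the easy direction - verifying a Pratt certificate. The certificate maps each prime n to a witness a (a primitive root mod n) and the prime factorisation of n - 1, and the factors are checked recursively:
Code:
def verify_pratt(n, cert):
    if n == 2:
        return True
    a, factors = cert[n]
    # The claimed factors must actually multiply out to n - 1.
    product = 1
    for q, exponent in factors.items():
        product *= q ** exponent
    if product != n - 1:
        return False
    # a must have order exactly n - 1 modulo n, which proves n is prime.
    if pow(a, n - 1, n) != 1:
        return False
    for q in factors:
        if pow(a, (n - 1) // q, n) == 1:
            return False
        if not verify_pratt(q, cert):  # each factor needs its own certificate
            return False
    return True

# Example: certificate for 31 (30 = 2 * 3 * 5, witness 3).
cert = {31: (3, {2: 1, 3: 1, 5: 1}), 3: (2, {2: 1}), 5: (2, {2: 2})}
print(verify_pratt(31, cert))  # True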

This meets the criteria for a useful proof-of-work: hard to generate, easy to verify, adjustable difficulty and incorporates the merkle root.

Mining pools are more complicated to implement, since integer factorisation is not as trivially parallelisable as hashcash. This might explain why the initial client is solo-mine only.

It also has the property of being sensitive to improvements in factorisation algorithms. This makes it somewhat resistant to ASICs, since algorithm improvements may invalidate ASIC designs, so ASIC developers may not wish to take on the risk.

(Edit: linear -> polynomial)
Post
Topic
Board Hardware
Re: Margin price estimation of USB Block Erupter
by
someone42
on 25/06/2013, 18:11:07 UTC
Don't know if this helps...

Top Left:
A symbol that resembles Phillips
HC574
BE76439
UuG648G

Top Middle:
A030
T2313A
MUDB
OF2670

Top Right:
SILABS
CP2102
DCLOCW
1311+

Bottom Right:
Z1021A1
ZAOP11


Here's my identification of the parts. The layout seems to match this identification. I've also included a per-unit price estimate, based on 10,000 unit quantities at Digi-Key.

Top left: NXP Semiconductor 74HC574 (octal D-type flip-flop), about 0.12 USD
Top middle: Atmel ATTiny2313 (8 bit microcontroller), about 0.72 USD
Top right: Silicon Laboratories CP2102 (USB to UART interface), about 2.30 USD
Bottom right: Alpha & Omega Semiconductor AOZ1021 (3A synchronous buck regulator), about 0.50 USD
Post
Topic
Board Hardware
Re: How can I clock Avalon to 325 MHz and beyond?
by
someone42
on 21/06/2013, 11:35:19 UTC
Question: what should be buf[6] and buf[7] for 325MHz, 350MHz, and 375MHz?

   if (frequency == 256) {
      buf[6] = 0x03;
      buf[7] = 0x08;
   } else if (frequency == 270) {
      buf[6] = 0x73;
      buf[7] = 0x08;
   } else if (frequency == 282) {
      buf[6] = 0xd3;
      buf[7] = 0x08;
   } else if (frequency == 300) {
      buf[6] = 0x63;
      buf[7] = 0x09;
   }

Use these at your own risk!

For 325 MHz: buf[6] = 0x2b and buf[7] = 0x0a
For 350 MHz: buf[6] = 0xf3 and buf[7] = 0x0a
For 375 MHz: buf[6] = 0xbb and buf[7] = 0x0b
The meaning of these values is documented on page 6 of the Avalon ASIC (A3256-Q48) datasheet.
Post
Topic
Board Hardware
Re: BFL BitForce SC Firmware source code
by
someone42
on 18/06/2013, 02:09:56 UTC
What do you think could be added to the block header?

Could you also explain what the effect of the chips being less-than-optimal.
I personally don't think anything will be added to the block header within 5 years, since that would have the effect of making lots of existing ASICs useless. Also, if anyone wants to "add" some extra data to a block, they can already do this (in a way that's compatible with all existing miners) by using the coinbase.

What I mean by less-than-optimal is that since the SHA-256 constants are not hardcoded, some logic can't be optimised out at compile time. Within an ASIC, this would manifest as increased die area or power consumption. But I have no experience with ASIC design, so for all my ignorance it could be an insignificant 0.0001% increase or an embarrassing 10% increase.
Post
Topic
Board Hardware
Re: BFL BitForce SC Firmware source code
by
someone42
on 17/06/2013, 12:41:22 UTC
From reading the firmware source and looking at the released datasheet, the BFL chips interestingly do not have certain SHA-256 constants hardwired. The firmware is responsible for setting the SHA-256 initial hash value for the first hash, as well as the padding and length of both the first and second hashes.

What this means is that (for example) if an extra field were to be appended to the block header, the BFL chips could handle this change (via a firmware upgrade), but the Avalon chips couldn't. This also means that the BFL chips are slightly less-than-optimal (I have no idea how much less than optimal), since some extra gates will be required to handle the possibility that those "constants" can change.
Post
Topic
Board Hardware wallets
Re: Lets talk hardware wallets...
by
someone42
on 17/06/2013, 12:19:07 UTC
I like Trezor and I will most likely be ordering a few.

Hardware wallets are needed in order for Bitcoin to get to the next level. Getting the average user to secure their computing environment against malware is a next to impossible task, and hardware wallets circumvent this issue.

I suspect we'll be seeing more projects of similar nature materialize in the coming months.
allten and I have been working on the BitSafe, another hardware wallet. Its development history actually goes back further than Trezor's. There are assembled open-source development boards available (see https://bitcointalk.org/index.php?topic=152517.0;all), as well as prototype open-source firmware (see https://bitcointalk.org/index.php?topic=78614.0).

The BitSafe project was picked up by Butterfly Labs, who are helping us bring it to market later this year. They have (understandably) decided to not do the preorder thing.

Does anyone know if Armory would be supporting this? I hope armory would support this so then I don't need a whole offline computer just for signing transactions.
I spoke to etotheipi at Bitcoin 2013 and he was very enthusiastic about hardware wallets. He felt that they complemented Armory well. Anyway, I think Armory is well-positioned to support hardware wallets, after all, a hardware wallet is basically an offline signing wallet.
Post
Topic
Board Hardware
Re: Klondike - 16 chip ASIC Open Source Board - Preliminary
by
someone42
on 03/05/2013, 13:45:44 UTC
Generally start with one, say, 0.1uF capacitance, and if it doesn't work, try changing the value lower or higher (depending on the frequency you need decoupling at) - once that is ideal, the best thing to do is add more of the exact same value capacitor in parallel. This increases the capacitance but also DECREASES the parasitic inductance, so it gets BETTER and has more decoupling capacitance.

note : People used to add say, 0.1uF, 10nF, 100pF caps in parallel because in theory this would give a wide range of decoupling

the problem is it produces anti-resonance where the coupling gets worse.  it's a very, very tricky thing to try and fine tune, and should be avoided in 95% of cases.

my suggestion - put pads for lots of 0603 0.1uF capacitances, but only populate the reference PCB amount.  if you need more or have to tweak, the pads are right there for it.  It's standard practice to have pads for parts you don't actually populate going into production.
Thank you for this. My experience has been exclusively with low-frequency stuff.

I think you could easily adjust the core voltage with some sort of programmable resistor on the buck reg. Not sure if such a thing is readily available but it should be. You could probably use a few FETs shorting out a binary series of resistor values to adjust the voltage divider.

eDiT: Oh geez, here you go...

http://ww1.microchip.com/downloads/en/DeviceDoc/22107a.pdf
(Now that I think about it you could probably use an analog output from the PIC as control voltage on the regulator but that would take some digging into to figure out)
According to the IR3895 datasheet, if Vref is grounded, then the output voltage can be adjusted by changing the voltage on the Vp pin. So you can easily add the capability to adjust the ASICs' core voltage.
Post
Topic
Board Hardware
Re: Klondike - 16 chip ASIC Open Source Board - Preliminary
by
someone42
on 02/05/2013, 17:36:17 UTC
Smaller is harder but I'm not sure it matters. One of those magnifying circular light dealies is recommended. Parts count is the killer. I just finished the prelim parts list and pushed it to github. There's 320 parts on that 10x10cm board. Ouch!

(Someone please send me a Pick n Place machine)
70% of those parts are decoupling caps for the ASICs. Is it really necessary to have 14 per ASIC? As a back-of-the-envelope calculation, I get about 5 nC of charge per clock cycle (based on 1.5 A @ 282 MHz). With 0.8 uF of lumped capacitance, neglecting ESR/ESL, that's about a 7 mV drop, which is small. Maybe you can get away with less? Maybe you can use larger capacitance values but fewer caps overall?
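
Spelling out that back-of-the-envelope calculation (assuming all of the per-cycle charge is drawn from the local decoupling caps, and ignoring ESR/ESL):
Code:
I = 1.5      # core current per ASIC, amps
f = 282e6    # clock frequency, Hz
C = 0.8e-6   # lumped decoupling capacitance per ASIC, farads

Q = I / f    # charge drawn per clock cycle
dV = Q / C   # resulting droop on the local caps
print(f"{Q * 1e9:.1f} nC per cycle, {dV * 1e3:.1f} mV droop")  # ~5.3 nC, ~6.6 mV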

I suppose the only way to know is to do some in-circuit testing on actual ASICs.
Post
Topic
Board Hardware
Re: Klondike - 16 chip ASIC Open Source Board - Preliminary
by
someone42
on 30/04/2013, 10:09:54 UTC
Quote from: BkkCoins
I looked on heatsinkusa and could only see a $12.35 option for 4" wide cut to 4" length. I guess I didn't find the right one. Can you give me a link to the $5 one? I'd like to see and if it's right then I'll just tell people they could order those, though I think even cheaper options could turn up.

I see they have a 4" wide option with 2" high fins for $12. Maybe that is large if we want to stack them close together.

They also have 3.5 and 4.2" inch sizes with .75" high fins for about $4 and $5 respectively. If 3.5x4" is big enough, $4.36 is pretty cheap.

Those 2" high heatsinks are quite beefy. The site claims a thermal resistance of 1.4 degrees C/watt for a 3"x4" section (presumably with natural convection). With such a thermal resistance, it looks like you can dissipate the heat of 16 chips with a slow/quiet fan, or perhaps no fan at all.