Quote:
"These chips are not going to be that hard to simulate. The design should be pretty simple."
Every ASIC released to date (BFL, Avalon, ASICMiner, Bitfury) has performed worse than its simulations. Some missed by small amounts and some (BFL, cough cough) weren't even in the right ballpark. Lots of smart people on lots of teams all came out high on clock rate and low on power consumption. Even KNC designed their boards to handle 320W despite a nominal power consumption of 250W, because power consumption just isn't that easy to simulate. Those DC-to-DC supplies aren't cheap, and the overengineering adds $50+ to the cost of each board ($200 for a Jupiter). Nobody spends an extra $200 per unit without a reason.
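A quick sanity check on those numbers. The only assumption here is that the $200-per-Jupiter figure implies four boards at $50 each; the 250W/320W figures come straight from the paragraph above:

nominal_w = 250                  # nominal per-board power from simulation
designed_w = 320                 # what the DC-to-DC stage is built to handle
extra_cost_per_board = 50        # rough added cost of the beefier supplies
boards_per_jupiter = 200 // extra_cost_per_board   # 4 boards, implied by the $200 figure

print(f"power headroom: {(designed_w - nominal_w) / nominal_w:.0%}")             # 28%
print(f"extra cost per Jupiter: ${extra_cost_per_board * boards_per_jupiter}")  # $200

That's 28% of headroom paid for in hardware, which only makes sense if you don't trust your simulated power figure.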
The reason is that accurately simulating power consumption has proven to be very difficult.
When you consider that their simulated 2.7 J/GH is less than half of what either Avalon's (6.6 J/GH) or ASICMiner's (6.9 J/GH) final silicon ended up using, there is a real risk that their simulation is too optimistic.
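For what it's worth, the "less than half" claim checks out:

sim = 2.7                     # simulated efficiency, J/GH
avalon, asicminer = 6.6, 6.9  # measured final-silicon efficiency, J/GH
print(sim / avalon, sim / asicminer)  # ~0.41 and ~0.39, both under half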
I've asked several times for their final simulation results with 16 miners per chip, but haven't found any. Does anyone know if they ever published them?
All I've found are the figures for 1 core multiplied by 16 (hashrate, power, area), which is not the same thing as simulating the whole chip (see the sketch below).
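To illustrate the difference, here is a hypothetical back-of-envelope sketch. The per-core numbers are chosen only so that 16x reproduces the 2.7 J/GH simulation figure, and the derating factors (clock-tree/I/O overhead, IR drop, thermal) are invented placeholders, not anyone's real data. They stand in for the chip-level effects a single-core simulation simply can't see:

cores = 16
core_hashrate_ghs = 6.25                # assumed per-core figure
core_power_w = 2.7 * core_hashrate_ghs  # 16.875 W, back-derived from the 2.7 J/GH claim

# Naive extrapolation: what "1 core x 16" gives you.
naive_ghs = cores * core_hashrate_ghs   # 100 GH/s
naive_w = cores * core_power_w          # 270 W
print(f"naive 1-core x 16: {naive_ghs:.0f} GH/s, {naive_w:.0f} W, {naive_w / naive_ghs:.2f} J/GH")

# Chip-level effects missing from a single-core sim (placeholder magnitudes):
clock_tree_and_io_w = 0.15 * naive_w    # shared clock distribution, pads, I/O
ir_drop_derate = 0.93                   # voltage droop across the on-die power grid
thermal_derate = 0.95                   # hotter die, lower sustainable clock

chip_ghs = naive_ghs * ir_drop_derate * thermal_derate   # ~88 GH/s
chip_w = naive_w + clock_tree_and_io_w                   # ~311 W
print(f"chip-level estimate: {chip_ghs:.0f} GH/s, {chip_w:.0f} W, {chip_w / chip_ghs:.2f} J/GH")

Even with these modest made-up derates, 2.70 J/GH turns into roughly 3.5 J/GH. That's the kind of gap a full 16-core simulation is supposed to expose before tape-out, which is exactly why I keep asking for it.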
