The basic problem with clouds is that they are quite expensive when you have a high volume of traffic, and that the servers are not that powerful.
You end up renting more cloud servers than you would if you used physical servers, which means you have to scale your operation wider. And as
always with scaling, as the number of servers grows, the extra power you get from adding one more server decreases.
For some compute-intensive systems (like mining/hashing) the per-server gain falls off very slowly; for anything where data has to be shared, though, the curve can be quite drastic.
There is software out there (and I'm not talking hobby coders, but software from the 10 original software companies) that has negative scaling after a certain amount of parallelization, i.e. the
time INCREASES when you add more servers. It always comes down to the basic algorithm, and then there are 2 options: redesign the algorithm or scale up the hardware.
Redesigning the algorithm is extremely expensive and not always even possible(?). Hardware scale-up is much easier when you have your own machines, and you DEFINITELY do not want to
run that stuff in virtual machines!
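To show the kind of curve I mean, here is a rough toy sketch in Python using the Universal Scalability Law; the model choice and the contention/data-sharing coefficients are made-up illustrative values, not measurements from any real system.
[code]
# Rough sketch of diminishing / negative scaling as servers are added.
# Model: Universal Scalability Law. The coefficients below are made-up
# illustrative values, not measurements from any real system.

def usl_throughput(n, sigma, kappa):
    """Relative throughput of n servers.

    sigma: contention (the serialized fraction of the work)
    kappa: coherency cost (pairwise data sharing between servers)
    """
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

if __name__ == "__main__":
    # A hash-like workload (almost no sharing) keeps climbing as servers
    # are added; a workload with heavy data sharing peaks around 8 servers
    # and then drops, which is the "time INCREASES when you add more
    # servers" case.
    for n in (1, 2, 4, 8, 16, 32, 64):
        print(f"{n:3d} servers: "
              f"hash-like {usl_throughput(n, 0.02, 0.0):6.2f}x, "
              f"shared-data {usl_throughput(n, 0.05, 0.02):6.2f}x")
[/code]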
There is a reason why "real" banks use specialized hardware.
Well, now I have to get back to work.
//GoK
I'm not even sure where to start.
You think that if a server crashes 'in the cloud', the data is unrecoverable?
You don't think that the particular people at Bitomat might have just not known what they were doing?
'In the cloud' isn't some magical meta-space where things don't exist and the data is unrecoverable; it just means the machines are somewhere else and you can programmatically request more of them, for, you know, like, proper load balancing.
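To be concrete about "programmatically request more of them", here is a minimal sketch against the AWS EC2 API via boto3; the AMI ID, instance type, and the "backend" tag are placeholder assumptions for illustration, not anything Bitomat or MtGox actually ran.
[code]
# Minimal sketch of "programmatically request more machines" using boto3
# (the AWS EC2 API). The AMI ID, instance type, and tag are placeholders;
# you would swap in real values for your own account and region.
import boto3

def add_backend_server(ami_id="ami-12345678", instance_type="t3.micro"):
    """Launch one more server that a load-balanced pool could pick up."""
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(
        ImageId=ami_id,
        InstanceType=instance_type,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "role", "Value": "backend"}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]
[/code]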
This stuff was figured out like 10 years ago, buddy. MtGox just don't know what they are doing.