Re: [Megathread] The long-known PoW vs. PoS debate
by n0nce on 04/03/2022, 16:55:58 UTC
Board: Development & Technical Discussion
A very large number of "VM" miners, but not unlimited - this kind of proof is actually designed to push people to develop efficient VM and container images just for sleeping - but like ASICs there's probably a performance ceiling to how many you can run at a time. In fact it could be the long-term heir to ASICs, since it would cause CPU cores and threads to be hoarded instead of GPUs, FPGAs, and energy-hungry ASIC miners, which are rendered useless under a proof-of-time or proof-of-idle model.
That's an interesting idea; however, ASICs already push for efficiency for obvious reasons. If the algorithm works, you will get people (companies) building specialized machines with the bare minimum of computing power needed to run a shitton of VMs, which in turn run extremely stripped-down, minimal OSes that do nothing other than run your mining program. Since each one consumes so little electricity, people will just fill warehouses with them, and... well, the energy consumption adds up and you end up more or less where you are now.

Not really. CPUs step down to a significantly lower clock frequency when they are idle, turning off things like turbo boost as well. A single ASIC uses somewhere in the ballpark of 1-2 kW, while a low-power desktop PSU draws around 300-500 W. Since higher-wattage PSUs are more expensive than lower-wattage ones, and these devices would be built purely for "sleep-mining" with nothing connected to the board other than the CPU, Ethernet, and perhaps a serial port, manufacturers would save money on parts by using low-power components that don't draw many watts. The form factor of these "sleep-miners" shouldn't be much smaller than an ASIC's, so a datacenter can be filled with only slightly more of these miners than ASICs, yet it draws fewer megawatts of power - which makes all the difference.
I'll run the numbers; just ballpark calculations.
A single general-purpose motherboard with a single extremely multithreaded CPU and a PSU costs a whole lot less than a 2kW miner, right? A Ryzen 5950X costs around 600€, so with motherboard and PSU you're at under one grand. Instead of a $10k S19, you could get 10 of these, then. They have 16 cores and 32 threads, so they should be well suited. At idle it pulls just 54W, so at the wall, with RAM and everything, call it 60W. Ten of these would be 600 watts, instead of 1 or 2 kW.
Definitely an improvement, but not the order-of-magnitude improvement one might wish for when thinking about 'idle mining', am I right?
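Just to make that comparison explicit, here's the same ballpark written out as a tiny Python sketch; all prices and wattages are the rough assumptions from the paragraphs above, not measurements:

Code:
# Back-of-the-envelope: one ASIC vs. a fleet of idle "sleep-miners" for the same budget.
# All figures are assumptions taken from the discussion above.

asic_cost_usd = 10_000      # rough price of an S19 (assumed)
asic_power_w = 1_500        # "somewhere in the ballpark of 1-2 kW", midpoint

cpu_box_cost_usd = 1_000    # Ryzen 5950X + motherboard + PSU, "under one grand"
cpu_box_idle_w = 60         # ~54 W CPU idle, plus RAM etc., at the wall

boxes_for_same_budget = asic_cost_usd // cpu_box_cost_usd   # -> 10 boxes
fleet_power_w = boxes_for_same_budget * cpu_box_idle_w      # -> 600 W

print(f"{boxes_for_same_budget} CPU boxes draw ~{fleet_power_w} W "
      f"vs. ~{asic_power_w} W for one ASIC "
      f"({asic_power_w / fleet_power_w:.1f}x less power, not 10x)")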

I spent a bit of time looking into Chia mining (proof of space and time), and while the cryptographic idea is that 'computation is free', in reality it's not: Chia mining rigs are computing new shares all the time on tons of threads and cores, which eats a lot of energy.

Chia is actually getting people kicked off of datacenters because it's taking a toll on their hard disks. So that's not exactly how you want to introduce an energy-efficient mining algo.
Yeah, right; I just wanted to give another existing example of a mining technology where, on paper, the cryptography looks good and it seems like it should barely consume any electricity, but which ends up pulling lots of power in the end anyway.