Topic: [WO] Serious Discussion of Very Serious Professional-Class GPUs
Board: Speculation
by nullius on 20/11/2020, 16:38:27 UTC
Despite its lulzy tone, this is a serious post, seeking advice from the Wall’s professional-class GPU expert brain trust.  I am a Unix terminal junkie, who does not play games or mine shitcoins—not a GPU expert!

The short opportunities continue to grow as fake news fuels the bitcoin HOAX. Little Shatoshi's project has failed as I proved years ago, but the fake news keeps fooling people into losing their money. Smart money will enter high leveraged short positions and keep them open in preparation for the epic meltdown soon to come. Bitcoin shorts will be the greatest trade, possibly in the history of the world.

Thank you. I have decided to sell based on this post.

my 1080ti gpus are now listed in the marketplace

Meh.  I guess that you are shorting your puny old consumer-grade gamer/shitcoiner hardware, because it is not good enough to mine KYC dox photos of me, photos of Lauda, knightly rides superior to a Lambo, Discordian pseudoscience, and other artworks?

  • One or more high-end NVIDIA GPUs, NVIDIA drivers, CUDA 10.0 toolkit and cuDNN 7.5. To reproduce the results reported in the paper, you need an NVIDIA GPU with at least 16 GB of DRAM.

I guess you need at least a Quadro GV100, or something like that?  With a lot lot lot of electricity...

[StyleGAN2 paper, Appendix F, pp. 20–21:]

We report expended computational effort as single-GPU years (Volta class GPU).  We used a varying number of NVIDIA DGX-1s for different stages of the project, and converted each run to single-GPU equivalents by scaling with the number of GPUs used.

The entire project consumed approximately 131.61 megawatt hours (MWh) of electricity....  Approximately half of the total energy was spent on early exploration and forming ideas.  A quarter was subsequently spent on refining those ideas in more targeted experiments, and finally a quarter on producing this paper and preparing the public release of source code, trained models, and large sets of images.  Training a single FFHQ network (config F) took approximately 0.68 MWh (0.5% of the total project expenditure).  This is the cost that one would pay when training the network from scratch, possibly using a different dataset.
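Their bookkeeping is easy to sanity-check. A minimal Python sketch: the MWh figures come from the quote above, while the 8-GPU, 40-day run is a made-up example of the conversion they describe, not a number from the paper:

```python
# Back-of-the-envelope check of the StyleGAN2 paper's energy numbers.

def single_gpu_years(n_gpus: int, days: float) -> float:
    """Convert a multi-GPU run to single-GPU-year equivalents by
    scaling with the number of GPUs, as the paper describes."""
    return n_gpus * days / 365.0

total_mwh = 131.61     # whole project, from the quote above
config_f_mwh = 0.68    # training one FFHQ config F network

# Hypothetical run: one 8-GPU DGX-1 for 40 days.
print(f"{single_gpu_years(8, 40):.2f} GPU-years")       # → 0.88 GPU-years

share = config_f_mwh / total_mwh
print(f"config F share: {share:.2%}")                    # → 0.52%
```

The 0.52% agrees with the paper's rounded "0.5% of the total project expenditure", so one training run really is a rounding error next to the exploration phase.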

What the fuck minimal hardware (and electricity budget) is realistically needed for entry-level playing with kitty AI?  Assuming a desire to train one’s own networks...
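On the electricity side, the worst case is easy to bound. A rough Python sketch, assuming a 250 W board power (the Quadro GV100's rated figure), 100% duty cycle, and a $0.12/kWh price — the duty cycle and price are illustrative assumptions, not facts from this thread:

```python
# Rough electricity bound for running one high-end GPU flat-out.
# 250 W is the Quadro GV100's rated board power; the utilization
# and the $0.12/kWh price are assumptions for illustration.

TDP_KW = 0.250
PRICE_PER_KWH = 0.12

def kwh(hours: float, duty: float = 1.0) -> float:
    """Energy drawn over `hours` at the given duty cycle."""
    return TDP_KW * hours * duty

year_kwh = kwh(24 * 365)  # one GPU-year at full tilt
print(f"{year_kwh:.0f} kWh/year, ~${year_kwh * PRICE_PER_KWH:.0f}")
# → 2190 kWh/year, ~$263
```

So a full GPU-year of the card alone is on the order of 2.2 MWh — i.e. roughly three config-F training runs' worth of electricity, before counting the rest of the box.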


P.S., proudhon, I already executed an infinitely recursive short on Bitcoin shorters.  At high leverage.  Confirmed science!  Thanks for the advice.  🙃☮