Showing 20 of 26 results by catfish
Post
Topic
(Unknown Title)
by
catfish
on 25/01/2020, 02:19:00 UTC
Finally - having worked out the voodoo magick required to get all 5 PCIe slots on the P8Z68-V logic board working with GPUs installed - I've upgraded Frame Rigs mk1 and mk2 so ALL available PCIe slots have a GPU card fitted, and upped the capacity of the PSUs so all GPUs can be reasonably powerful cards. The two rigs contain, in order of hash-rate achieved (i.e. how far each card in each class will overclock... some are a LOT better than others!):

1x Sapphire 'old' 5850 (awesome card - fastest in the Catfish farm and runs cool (67˚C) whilst whipping the rest for hashrate)
1x Sapphire 'extreme' 5850 (not so 'extreme' - I hate that word, overused to the point of meaninglessness - but clocks up to between 370 and 390 MH/s)
2x XFX 'Black Edition' 5850 (very disappointed with these - clocked at 765 MHz with a 775 recommended upper limit!!!)
3x 'Value' 5830 (from overclock.co.uk - all have broken fan shroud flanges but are perfectly functional, overclock from 800 MHz to 990...)
2x 'Value' 5830 (as above, second purchase, cards look identical but recommended upper limits are different, one can't sustain 990 MHz, one can)
2x Sapphire 'old' 5770 (dual-slot retail cards, clocked at 850 MHz but go all the way to 1000 - 236 MH/s with only one PCIe power cable)

http://www.catfsh.com/bitcoin/open-frame-mk3/slots-full-above.jpg

The first mining card I ever purchased was the 'old' retail Sapphire 5850 - £175 from PC World (yeah, I know) - but it's a monster and no other card I've bought since can match it for hash rate. Weirdly enough, rotating the second rig 90 degrees has hugely improved airflow - all cards are overclocked, yet all but three are running stably below 70˚C. The ones above 70˚C are at 70.5, 71 and 71... not bad for 2kW of space heater!

http://www.catfsh.com/bitcoin/open-frame-mk3/line-of-radeons-1.jpg

Nice to get rid of those anaemic single-slot 5770s at last - they will be going in a 'final' rig - assuming I can kick this damn mining-hardware addiction Smiley

http://www.catfsh.com/bitcoin/open-frame-mk3/line-of-radeons-2.jpg

This entire 5-card rig runs each card between 57˚C and 70.5˚C, and doesn't seem to be overstressing the Cooler Master Silent Pro 1000W PSU. The word 'Silent' is laughable, since 11 GPU fans running flat out make a HUGE amount of noise Cheesy

http://www.catfsh.com/bitcoin/open-frame-mk3/snakepit-1.jpg

Apart from the voodoo on the 5-card logic board, all other cards are connected to their logic boards using unpowered x1->x16 PCIe extender cables. It's hard to make this sort of thing tidy, but I'm convinced that my choice to use x1 thickness ribbon cables (as opposed to full-width x16->x16 PCIe extenders where possible) improves airflow, and hence cooling performance.

http://www.catfsh.com/bitcoin/open-frame-mk3/snakepit-2.jpg

Yup... need to use my Nikon and proper macro lens  Undecided  Undecided
Post
Topic
(Unknown Title)
by
catfish
on 25/01/2020, 02:19:00 UTC
hold up, that card on the left has two extenders between the motherboard and the card - that works ok?

Valid question. I'd also like to know!
Yup, it works. Two of my rigs have the leftmost card on two extenders, since the CPU gets in the way.

I just use an x1-x1 from the logic board, then an x1-x16 into the graphics card. I haven't needed Molex augmentation so far, though I don't put my most power-hungry cards on these daisy-chained extenders.
Post
Topic
(Unknown Title)
by
catfish
on 25/01/2020, 02:19:00 UTC
The Catfish Bitcoin Mining Shelf Rig Mk II - nearly complete:

http://www.catfsh.com/bitcoin/shelf-rig-mk2/11-nearly-complete-1.jpg

Remaining things to do - remove CD-ROM (was only there for installing Linux on the three boards), screw in another wooden support for the hard drives, secure the PSUs where they currently sit (under the temporary boarding that the HDs sit on), and add two front doors - open mesh (like a rabbit hutch) or perspex frame with a single row of PC case fans horizontally down the centre, to blow air *onto* the top of the GPUs.

The aim is to get a furniture-like modular piece that can be moved anywhere and vents the majority of the hot air upwards, where it could be collected in an airbox and extracted using an extractor fan (or used to heat the house).

This one is consuming 2000W at the wall and is running really cool for an air-cooled setup - I haven't started overclocking in earnest yet but there's loads of headroom to come. I've finally arrived at a design I'm satisfied with, and will be rebuilding my Mk I Shelf Rig into the same design (I've got another two boards and 12 cards to use)...
Post
Topic
(Unknown Title)
by
catfish
on 25/01/2020, 02:19:00 UTC
Wow cool that is a nice setup - where do you intake/exhaust the power supply fans though?  How many psu are you using?
If you're talking about mine, firstly thanks Smiley - otherwise, the design is modular.

Each module is a Gigabyte microATX board with 4 PCIe slots, and 4 GPUs on extenders. This fits in a cube, so there are three cubes in a row on the second shelf of the three-shelf unit.

The top shelf has the middle slat removed to allow hot air out.

The middle shelf has the middle slat removed to install three domestic extractor fans, which blow air UP into the electronic cube modules. The logic boards and GPUs all use extender cables to drop the inputs (ATX 24-pin, ATX12V 4-pin, and all the PCIe 6-pin GPU power cables) down below the second shelf.

The bottom shelf has all three slats intact, and is where the PSUs stand. Each PSU (one per module) stands on its side so it's pulling air in horizontally and blowing air out of the front of the unit. The leftmost module has 3x 5850 and 1x 5830, hence is using an 850W Cooler Master. The middle module has 4x 5830 and is using an 800W Corsair. The rightmost module has 4x 6950 and is using a 1000W Cooler Master.

I want to put in a sub-shelf for the hard drives - currently there's a random piece of MDF board sitting on the PSUs, and the HDs are sat on top of that. However, there's PLENTY of room on the bottom shelf - the middle shelf is somewhat cramped for access Smiley but the PSU level is not. It runs surprisingly cool, but is the result of a LOT of R&D (i.e. previous failures, and valuable help from a member of the MMC pool I use).

I'm going to break the habit of a lifetime and actually *finish* this one *completely* before building another one Cheesy
Post
Topic
(Unknown Title)
by
catfish
on 25/01/2020, 02:19:00 UTC
^^ Best temperature of all my designs so far! Loads of help from a chap on the MMC forum though. Three domestic extractor fans blow air up from the second shelf. Temperatures range from 55˚C to 77˚C - all cards very mildly overclocked to 850 core just to see if it works.

It's been running since I finished last night - temps are well under control. The entire airflow was worked out to work *with* natural convection, and not fight against it like my previous attempts. The system even runs without the extractor fans, but the 6950s get a bit toasty.

The cards are, from left to right:
5830 XFX new design
5850 XFX Black Edition
5850 Sapphire 5-heatpipe 990MHz overclock monster
5850 Sapphire Extreme
5830 Peak Value
5830 Peak Value
5830 Peak Value
5830 Peak Value
6950 XFX mk2 no BIOS switch
6950 XFX mk1 with BIOS switch
6950 Sapphire big-fan no BIOS switch
6950 Asus DirectCU II overclock monster

I want to finish it but I've got another one to build!!!! Takes a day to build, though the design took ages. The hardest bit is the aluminium GPU bracket support - you NEED to get that absolutely *bang* on - half a millimetre out on a screw hole and the card will hang at an angle and touch another GPU.

Believe me, temperatures are NOT a problem... there's more space between each GPU than inside gamer PC cases, remember...

The shelf kit costs £14 from Homebase. Or was it £12? Whatever, it's cheap. Smiley
Post
Topic
(Unknown Title)
by
catfish
on 25/01/2020, 02:19:00 UTC
^^ With similar curiosity - I have two questions only:

1. How much does a complete kit cost (i.e. board, heatsinks, necessary special cables, etc.)?

2. Can I run the mining software on a pool without needing to know VHDL or assembly-level code?

Actually, there's a third question. Assuming that this is an FPGA board, and from the hash rate you quote, it sounds like a fairly decent spec FPGA - how much does the software licence cost in order to load the BTC Miner gate logic onto the FPGA? I don't know much about this level of engineering, but have heard many stories about requiring proprietary software to load your own 'code' onto the FPGA itself, and that this proprietary software costs an absolute fortune.

I'm mining for the money (not that it's particularly profitable right now) so my mining rigs would be considered 'business' in court (indeed, the hardware was bought by the business I own). So using 'evaluation' or 'academic' licences would be fraudulent.

I'd love an FPGA setup (after seeing the number on the watt-meter attached to my Shelf Rig above) but I have a feeling that the FPGA-loading software costs 4 figures. And since I can't design my own gate-level logic, I wouldn't have any use for said software other than initially loading up the bitcoin mining logic.

Doing it for fun would be great - I'd like to purely to experience a new type of hardware and software - but I'd feel uneasy as hell about ripping off a software package worth thousands. I can't lie that I'm 'academic' or 'evaluation' if I'm actually making money using the damn thing...

(by the way, this is only my opinion, and not casting a judgement on anyone else here - prejudice isn't my style, especially when I have no information)
Post
Topic
(Unknown Title)
by
catfish
on 25/01/2020, 02:19:00 UTC
^^ Marvellous. Must email ztex about their boards... been thinking about it for ages but not done anything...
Post
Topic
(Unknown Title)
by
catfish
on 25/01/2020, 02:19:00 UTC
Why add a second powered riser? I've been curious as to what these are used for.
I'm really curious to see what would happen in the long run.

Touch the single 12V wire going to your motherboard connector to see if it's hot.
It may be providing ~45 watts per GPU, or 30 amps on one wire (rated for ~8 amps).

(45W according to http://blog.zorinaq.com/?e=42 using a 5970)

I personally use powered extenders when going past 5 GPUs.
I'm interested in this, even though I've given up on 5+ GPU setups - too much aggro getting them to work, and logic board / PSU costs increase non-linearly above 4 GPUs per board.

But my incompetence at getting a 5-GPU rig working (took ages, though *did* manage it eventually) may be explained partly by this issue.

I count 2 +12V wires on the fat EATXPWR connector, and another two +12V wires on the ATX12V power connector. My boards mostly use the four-pin ATX12V sockets as they aren't 'extreme performance' boards and aren't rated for BIG wattage CPUs, but the 5-GPU board I was messing with had the 8-pin version, which has four +12V wires.

So for the logic board and expansion slots, you've either got four or six wires, each of which is 18AWG according to the ATX PSU spec (though most of the high-power PSUs use 16AWG - my Corsair does, for example).

18AWG cable is 'recommended' for up to 40A at 12V with cable lengths 3 feet and below (a reasonable assumption in computer PSUs), according to this link:
http://www.rbeelectronics.com/wtable.htm - I am not 100% sure of the veracity of its claims, so YMMV, but I'm assuming that the electronics guys writing that website mean that the 'recommended' gauge in the table won't be running within an inch of its life.

Hence if the table suggests that a 12V requirement of 40A would be *ideally* served by 18AWG cable for 3 feet or less, then I assume that this *doesn't* mean that said cable will be hot to the touch and starting to soften its insulation...

On top of this 'absolute' max current the cable will carry, there's the ATX spec, which IIRC allows a max of 240VA on any wire. So for our 12V feeds for our GPUs etc., the ATX standard says 'no more than 20A please chaps'.

So I'm guessing that even with my 'small' logic boards, the four 12V feeds can give a maximum of 960W *just to the CPU and expansion slots* - that's at the maximum 20A per wire.

But if 18AWG is OK for 40A on short stretches, I can't see it even getting *warm* at 20A. Temperature of cabling is much more likely to be influenced by the hot air expelled by the GPUs and other components in a bitcoin rig, if what I say above is anywhere near true.

Using 'warm cables' as a diagnostic measure is therefore a BAD idea since it's going to result in false positives ALL of the time. If 18AWG is good for 40A as 'recommended' then I doubt it'll get hot until over 50A - at which point you're talking about 600W through ONE wire, which no GPU will pull. Note that I'm ignoring the CPU and chipset here, because as a bitcoin mining rig, the CPU is underclocked to the minimum and only one core runs. If you're running a 250W CPU as well then the power available to the PCIe slots *may* become a problem. But the cables shouldn't get hot, surely??
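
To put some numbers on that, here's a rough Python sketch of the same arithmetic - the 20A-per-wire figure comes from my 240VA reading of the ATX spec and the 40A figure from the table linked above, so treat it as back-of-envelope rather than gospel:

Code:
# Back-of-envelope +12V budget for the logic-board feeds (assumptions as above).
RAIL_V = 12.0                 # volts
ATX_LIMIT_A = 240.0 / RAIL_V  # 20A per wire, from the 240VA reading of the ATX spec
CABLE_LIMIT_A = 40.0          # claimed 18AWG short-run limit from the table above

def board_feed_budget(eatx_12v_wires, atx12v_wires):
    """Max wattage the +12V wires into the board can deliver: (by spec, by cable)."""
    wires = eatx_12v_wires + atx12v_wires
    return (wires * ATX_LIMIT_A * RAIL_V, wires * CABLE_LIMIT_A * RAIL_V)

# 'Small' board: 2 wires on the EATXPWR plug + 2 on the 4-pin ATX12V plug
print(board_feed_budget(2, 2))  # (960.0, 1920.0) -> 960W by spec
# 5-GPU board with the 8-pin ATX12V plug (four +12V wires there)
print(board_feed_budget(2, 4))  # (1440.0, 2880.0) -> 1440W by spec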


Personally, I'm much more concerned with GPUs pulling power through the nice PCIe extenders a lot of us use. The ribbon cable used, even dual layer, is much thinner than 18AWG and I *really* don't know the safe ampacity of the conductors. Personally I'd like to keep the GPUs pulling ALL their required power from their 6-pin connectors - even a single 6-pin connector (with three 12V conductors at 18AWG) would be safe to a 720W load, assuming the wires followed the ATX spec. Ignoring the spec, using one conductor to supply an entire GPU would be safe up to 480W. I'm happy with that, which is why I'm not too worried when I see 6-pin converter cables that simply loop two 12V pins to the same cable, and the same with the ground. The cable *itself* can carry the current.
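
The same back-of-envelope sums for the GPU power plugs, in the same hedged spirit (again using my assumed 20A/40A per-wire limits, not anything from a datasheet):

Code:
# +12V headroom on the GPU side, using the same assumed per-wire limits.
RAIL_V = 12.0
ATX_LIMIT_A = 20.0    # per wire, 240VA interpretation of the ATX spec
CABLE_LIMIT_A = 40.0  # claimed 18AWG short-run limit

six_pin_by_spec = 3 * ATX_LIMIT_A * RAIL_V      # three +12V conductors -> 720W
one_wire_by_cable = 1 * CABLE_LIMIT_A * RAIL_V  # a single conductor -> 480W
print(six_pin_by_spec, one_wire_by_cable)
# Either number dwarfs what any card in this thread actually draws, which is
# why the unknown ribbon conductors worry me more than the 18AWG wiring does.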

The ribbon cables are a problem though. Molex-augmented extenders are IMO irrelevant unless your logic board's *LOGIC* limits the 12V current supplied to the PCIe slots artificially. If you *really* need the current in the extender, then the limit isn't the Molex or the 18AWG yellow wire - it's the thin conductor in the ribbon cable that the yellow wire is spliced to.

I've got a Molex-powered extender from Cablesaurus. The Molex connector takes a single stretch of 18AWG and splices into the ribbon cable. How can the Molex augmentation supply more power than the ribbon cable can already safely take? In fact, since the ATX spec allows 20A on those 18AWG Molex yellow wires, the powered extenders could actually make a fire *more* likely, by allowing the thin ribbon cable to draw more current than the slot itself would supply.


Can anyone tell me the wire gauge and rating of the ribbon cables used in our PCIe extenders? It's not usually a problem, since IME the power drawn by GPU cards is primarily pulled from the 6-pin power feeds, with only residual power taken directly from the board (obviously only talking about GPUs with 6-pin power connectors here, of course!).

I've got a Mac Pro which has a mad number of sensors to query, and it's telling me the exact amperage going through the two PCIe power extensions (I've got Y-cables on these...), along with the exact amperage going through each PCIe slot on the logic board. If this behaviour (preferentially using the 6-pin supplies before taking from the PCIe slot) is some cool Mac OS X thing, then my theories may be useless - because none of us run Mac OS X as a main mining OS, and all my big-power GPUs are on Linux boards. But if this is how the GPUs are designed, and also due to higher resistance on the PCIe extenders (making the distribution an electrical issue), I don't see how splicing an 18AWG power cable into the *start* of the existing extender ribbon cable will reduce the resistance of that ribbon cable.

Sorry for big post, thinking out loud.



TL;DR = I don't think powered PCIe extenders are of any use beyond the edge-case where your logic board *deliberately* restricts available current to the PCIe slots when all are populated. And on the other hand, the Molex connector is capable of delivering enough current to melt and set fire to the ribbon cable it's spliced into. I don't use them. Happy to be proved wrong, or any assumptions challenged though Smiley
Post
Topic
(Unknown Title)
by
catfish
on 25/01/2020, 02:18:00 UTC
Heh - lovely wooden frame mate and sharp shooting.

Just a point and shoot for these pics, but I've upgraded my wooden frame to wood and aluminium... two frames now running (evo 1 and evo 2 - the second packing two logic boards and two PSUs)... I've got a right motley crew of GPUs, so flexibility is the key. I wish I'd removed that damn item sticker from the wooden dowel I use for GPU card supports though - a little finishing would make these look reasonable Smiley

http://www.catfsh.com/bitcoin/miners/frame-rigs-1.jpg

http://www.catfsh.com/bitcoin/miners/frame-rigs-2.jpg

Sadly such enthusiasm leads to this (and very hot extension cabling until I uprated it - oooops!)...

http://www.catfsh.com/bitcoin/miners/power-meter-1.jpg
Post
Topic
Board Archival
Re: Pictures of your mining rigs!
by
catfish
on 08/04/2013, 23:06:29 UTC

that's a good design, I am thinking of something like this, my previous design isn't that stable for the "second floor" cards.

I had some materials lying around, so the design was based on "hmm, what's in stock?"  Grin
And to think that I received a bit of mockery at the time for my aluminium-and-wood crazy-rigs... I never set fire to anything in the end, even with the 12-GPU wooden shelf rig (the second - long since dismantled - only had 8) pulling its power from a single household supply...
http://www.catfish.sh/bitcoin/shelf-rig-mk2/13-early-shelf-rig-power-1.jpg

This was run non-stop, 24 hours a day, 7 days a week for getting on for a year IIRC. Didn't ever turn the central heating on during the 2011/2012 winter - I even had to keep the bedroom windows open to let some heat out in December. Noisy, unpleasantly hot, and eventually unprofitable (until the recent insanity began).

Now I'm virtually all FPGAs and am desperate to get more hashrate (I'm a committed, but very small, miner), but FPGAs are expensive and in my experience only compete with GPUs on power consumption (my GPUs were the best hash-per-£ cards - 5770, 5830, 5850, 6950, all with special tweaks - and the FPGAs are a combination of Ztex's 1.15d and 1.15x boards, so all Spartan-6 units).

I lost out on Tom's bASIC even though I considered my due diligence pretty good; he had a record of delivering and didn't set off my risk-analyst radar like BFL did. And whilst Avalon *have* delivered ASICs, I can't take days off work glued to my computer, propped up on amphetamines and repeatedly hitting ⌘R on the slim chance that I'll get onto the order webpage - and even then needing the luck to be able to actually *pay* and get a guaranteed build slot. I fear now is far too late anyway and Avalon have already sold their entire short production run, so it's 'start the process again'. I wouldn't be surprised if BFL go the same way, since there are too many rumours of chips not making the cut, and like Avalon they would only have a very limited number of discrete chips to package and test. With all the pre-orders, I'm sure I've missed the boat Sad

Sure, I have the option to spend another £5k on 25 of Stefan's wonderful little boards and build another cool looking rig. It will be ultra-efficient and use little power. But, like the current one, it only has around 5 GH/sec. The ASIC devices are looking like around 50 GH/sec for something around £1k to (probably) £4k. I had a paid-for ASIC order in along with the other early adopters, but unluckily my order was with Tom and not Avalon. So if the ASIC boat has gone, with terahashes of power in various states of assembly at BFL and Avalon (et al) already earmarked for customers, buying as many FPGAs as possible *now* doesn't sound like it'll ever pay off Sad And that's even if I could get the FPGAs in the first place. Perhaps I should sell my existing FPGA rig for Bitcoins and simply invest (buy and hold only)...

Anyway back on topic, aluminium and wood with upright GPUs on PCIe extender risers (daisychained!) has proven over a year or so (IME) to be the best 'cheap' design. No ducted air analysis, no dead spots, easy to keep cool with a single desk fan, components accessible. Paradoxically, it's quieter than a game-boy-spec-extreEEEMEEE!!!!11! case with clever fans and also the best orientation for the GPUs. Here's one I made earlier (August 2011):

http://www.catfish.sh/bitcoin/open-frame-mk2/mk2-complete-light.jpg

...and what it evolved into (half of which is still running as I type...):

http://www.catfish.sh/bitcoin/open-frame-mk3/slots-full-above.jpg
Post
Topic
Board Archival
Re: Pictures of your mining rigs!
by
catfish
on 29/09/2012, 15:33:30 UTC
Ah... looks like I forgot to post a couple of my rigs in here after all... might as well before I rebuild them, which will be happening soon. These aren't new - the FPGA rigs were built in May this year:

GPUs (two of these, only one still going):
http://www.catfsh.com/bitcoin/shelf-rig-mk2/12-frame-A-upgrade-1.jpg

FPGAs:
http://www.catfsh.com/bitcoin/fpga/first-ztex-fpga-test-1.jpg

http://www.catfsh.com/bitcoin/fpga/ztex-5-fpga-rig-1.jpg

http://www.catfsh.com/bitcoin/fpga/ztex-20-fpga-rig-3.jpg

Post
Topic
Board Hardware
PC ATX PSUs for FPGA clusters? 5V / 3.3V Rail Issues?
by
catfish
on 13/04/2012, 21:03:53 UTC
Quick question: would the 'clean' 12V DC supplied by the decent-quality PSUs I invested in for my GPU mining farm be better than using 5 or 6 'black-box' cheap 'router-type' power supplies?
Post
Topic
Board CPU/GPU Bitcoin mining hardware
New XFX 58xx cards - red square pattern failure in Catalyst???
by
catfish
on 21/11/2011, 10:45:42 UTC
I'll try to keep this short. I need some help - anyone got any ideas?

Post
Topic
Board CPU/GPU Bitcoin mining hardware
EMI and PCIe extenders. Problem or just internet forum noise?
by
catfish
on 02/11/2011, 22:32:13 UTC
delete
Post
Topic
Board CPU/GPU Bitcoin mining hardware
Random failed GPUs - definitely dead?
by
catfish
on 23/10/2011, 14:04:25 UTC
I have around 20 GPU cards due to my little mining operation. Four of these cards have died - one appears to have been DOA, and the others died after very little use. None have been overclocked idiotically - and importantly, I *never* overvolt my cards.

Interestingly, these 'dead' cards pass the BIOS POST. I get nothing but a black screen, but the monitor doesn't say 'going to sleep' due to no signal - it's actually a real 'black screen'.

All of them will spin their fans up to normal speed when connected to power and PCIe in a test board. They also rev the fans up to full speed if I pull out the PCIe cable (all cables - power and PCIe-extender - are known to be good).

Very oddly, if I configure a normal system (Gigabyte H61M-D2-B3 board, one x16, three x1 PCIe slots, all filled using x1->x16 extenders) with one of the cards being the 'dead' card, Linux will recognise it when interrogating the PCI bus (i.e. all four cards are shown when lspci is executed).

It's only when I install the ATI drivers (I use 11.6 on Ubuntu 11.04 'natty') that any aticonfig command locks up. It doesn't lock the machine - I can ctrl-C out of the command. But there's something wrong with the card, and it interferes with other cards on the PCIe bus too.
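
For anyone doing the same triage, here's roughly how it could be scripted - walk the adapters and give each aticonfig query a timeout, so a wedged card shows up as a timeout rather than a hung terminal. A minimal Python sketch only; the --odgt/--adapter flags are from my memory of the Catalyst 11.x tools, so double-check them on your own install:

Code:
#!/usr/bin/env python3
# Illustrative only: probe each adapter with a timed-out aticonfig call.
import subprocess

NUM_ADAPTERS = 4   # e.g. one x16 + three x1 slots, all populated
TIMEOUT_S = 10     # seconds before we give up on an adapter

def probe(adapter):
    cmd = ["aticonfig", "--odgt", "--adapter={}".format(adapter)]  # temperature query
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, timeout=TIMEOUT_S)
        return "OK" if out.returncode == 0 else "error (rc={})".format(out.returncode)
    except subprocess.TimeoutExpired:
        return "HUNG - suspect card"
    except FileNotFoundError:
        return "aticonfig not installed"

if __name__ == "__main__":
    for n in range(NUM_ADAPTERS):
        print("adapter {}: {}".format(n, probe(n)))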


Has anyone else had this behaviour, and does anyone else know how to fix it? I'd have thought that a burnt-out card would either fail to operate at all (i.e. no fan, no output from DVI port) or would mess up the logic board to the point of failing the POST. One Asus board I own has separate LEDs for failure points in RAM, CPU and GPU - if the RAM is faulty, the RAM lights up, if the GPU is faulty, the GPU lights up, etc. My 'dead' cards don't light up the GPU LED and the POST seems to complete fine.

What concerns me is that (a) I may be chucking away a perfectly usable card - where I may just need to flash the BIOS or something, or (b) that continuing to attempt to resuscitate the cards could burn out a logic board and three other expensive GPUs.

Incidentally, all but one of the dead cards were XFX brand. One was a 5850 Black Edition, which I was *hoping* to be an overclock-monster and a strong card for hashing... guess not Sad

Is there anything I can look for that would be an easy fix? There don't seem to be any burnt-out visible components on the board.

Or is the GPU field yet another consumer-electronics scam where even the tiniest, fixable failure means 'throw it away and buy a new one'? Sad

Mining profitability right now is *not* looking good so I'm not keen on buying new hardware just yet...
Post
Topic
Board CPU/GPU Bitcoin mining hardware
PowerPC, Altivec and massively parallel farms
by
catfish
on 20/10/2011, 11:02:50 UTC
OK, this is a mad blue-sky idea - feel free to shoot it down (but please explain with facts and numbers, rather than just insult me for coming up with an idiotic idea) Cheesy

I remember back when I was crunching for Seti@Home. I'm a Mac and OS X enthusiast (note that I'm *not* an Apple fanboy - some of their business practices and products piss me off) and have some fairly serious Mac gear. The calculations required for Seti are the same sort of embarrassingly-parallel computations that lend themselves to distributed computing. At the time, before GPGPU code became the norm, CPU crunching was the competition, and I once got to the top of the tree (recent average credit, #1 worldwide) with my mad Quad G5 PowerMac.

The Quad G5 PowerMac was an insane machine - stupidly expensive, needed liquid cooling for its CPUs, and when crunching work units, the northbridge ran at a nice constant 110˚C. That is *not* a typo, nor do I work in Fahrenheit. Eventually the G5's liquid cooling system started to fail, so I moaned a bit to Apple about only getting a year out of a £4,500 machine, and they gave me an 8-core Xeon Mac Pro for the price of a Mac Mini, which was nice of them (but a different story)...

The reason the Mac G5s were so highly represented on the Seti leaderboard (against Opteron servers and similar Intel server kit) was the PowerPC chipset, and specifically the vector processing instructions (especially the 'vector permute' operation). This gave a huge shortcut, allowing the Seti processing (IIRC, lots of FFTs) to do the same work in many fewer clock cycles.


GPUs operate on similar principles and are clearly the fastest hashers for Bitcoin mining. But they use a LOT of power, and the cost of electricity is rapidly outstripping mining profit. Hence miners are looking to lower-power solutions.

The FPGA approach is being actively worked on by a bunch of enthusiasts, but the FPGA boards themselves are expensive, they're not simply 'plug and play', and there's the question of whether one can legally make money mining when very expensive software licences are required to load the code onto the FPGA.

I was always impressed with the optimisations one could achieve with the PowerPC instruction set and the Altivec instructions - PowerPC has loads of registers, and the fancy 'vector permute' instruction. There's no way an old Apple G4 CPU would be a fast hash engine for BTC mining, but with massive optimisation, and new CPUs (the G4 is still produced for routers and other embedded kit - low power consumption is a priority), would a big parallel array of G4 boards be anywhere near usable for mining?


TL;DR: the old PowerPC chips Apple used (G4 onwards) had vector processing instructions. Could highly-optimised code run well for BTC mining, if one focuses on hash per watt? FPGAs will win but they have a monstrous upfront cost, whereas a load of dual-core G4 and G5 Macs can be acquired relatively cheaply. Making a bare-bones machine would consume even less power. Does anyone know - and has anyone tried writing a highly-optimised hashing function for Altivec?
Post
Topic
Board Mining software (miners)
Mac OS X - Multiple GPU Mining? Possible?
by
catfish
on 06/10/2011, 07:22:06 UTC
-
Post
Topic
Board CPU/GPU Bitcoin mining hardware
Radeon 6790 vs 6770 - what are they??
by
catfish
on 25/09/2011, 17:57:53 UTC
deleted
Post
Topic
Board CPU/GPU Bitcoin mining hardware
Shroudless GPU Cooling?
by
catfish
on 12/09/2011, 12:03:17 UTC
Most of us who build open-frame rigs use pretty much the same layout, even if the materials differ and there's a big variation in design quality and aesthetics!

We tend to lay the logic board flat on the base of the rig, with the GPUs then standing vertically, connector ports facing you (looking at the rig head-on). The PSUs sit either behind the logic board or on one of the sides (or both, with dual PSUs and lots of GPUs).

Nevertheless, nearly everyone keeps the GPU cards with their stock fans and cooler shrouds fitted. These cards are, AFAIK, designed to sit *inside* a PC case and blow hot air out of the vents at the back. Since the pressure inside a PC case will be higher (in general - not just from fan performance but also because of the higher temperature), the GPU cards should behave *as designed* and exhaust air out through the port-end vents.
Post
Topic
Board CPU/GPU Bitcoin mining hardware
Hardware Failures at reasonable temperatures?
by
catfish
on 11/09/2011, 17:07:01 UTC
I went to Switzerland for a quick break (a bit over a week), so I had to shut the windows and doors in my house. Normally the mining rigs in the back spare bedroom get cool air from fully open windows. A group of standing fans moves the air around, and together with convection, cold air from outside comes in to deal with the 3 kW of heat pumped out by the GPUs.

I have a web page that constantly monitors every GPU's hash rate, accepted/rejected results, temperature and the most recent Phoenix result message. As expected, temperatures jumped *significantly* without the outside air.

Out of concern, I logged in remotely within a couple of days and cut the overclocks significantly, and at no point did any of the GPUs exceed 84˚C. I know that's high, and my *normal* setup keeps the cards below 75˚C. Only a few cards - the 5830s - run that hot; the rest are usually around 60.

Nevertheless, despite never pushing the overclocks and temps to *stupid* levels, a couple of cards locked up after a few days.


When I got back, I expected I'd just be able to reboot the boxes and everything would be fine. That wasn't the case. After spending a lot of time diagnosing why two of the towers wouldn't boot, I found that one 5850 (an XFX Black Edition) and one 5830 (a 'value' card from overclock.co.uk) were broken.

I couldn't get them working in another machine either - if they're plugged into a PCIe socket, as soon as the OS probes the cards, everything hanging off the bus locks up. In particular, the onboard Ethernet dies, which makes further diagnosis impossible (well, I pulled the USB flash drive, mounted it on a working Linux box and went through the logs - but there was nothing wrong in the system log).


My question - has anyone else had hardware failures like this? Radeon 58xx GPUs can supposedly run up to 110˚C, at which point they throttle thermally to prevent damage. I've even seen comments along the lines of 'keep them under 90 and they'll be fine'.

Yes, I overclock... but if the core temperature never exceeds 84˚C, can that permanently kill a GPU? 'That's just the core temperature - the RAM could be hotter' - but I *underclock* the memory on every card to 275 MHz. The RAM should be fine, right?

What other critical components are there on Radeon cards?


Of course, it could just be bad luck, but if it isn't, I want to make sure it doesn't happen again. I'm replacing the two dead cards with a couple of new ones.