I just went and measured. In the wake of the mess of power cables, airflow is around 2.3 m/s. In a similar position above, out of the cable wake, airflow is about 3.8 m/s. If you feel around with your hand, the effect is quite distinct. There is no noticeable effect from Ethernet cables, though.
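As a rough back-of-the-envelope, the cooling capacity of the air scales with its mass flow, so those two readings already tell you roughly how much cooling is lost behind the cables. A minimal sketch, assuming the same cross-section and air density at both measurement points:

```python
# Rough back-of-the-envelope: relative loss of air mass flow behind the cable mess,
# assuming equal cross-section and air density at both measurement points,
# so mass flow scales with velocity alone.
v_clear = 3.8  # m/s, measured out of the cable wake
v_wake = 2.3   # m/s, measured in the wake of the power cables

loss = 1 - v_wake / v_clear
print(f"Air mass flow (and roughly cooling capacity) down by about {loss:.0%}")  # ~39%
```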
The performance and reliability of most servers are not significantly affected by airflow resistance: if a server gets a little too hot, it just increases the fan speed. With SP30s, the fans are running full blast anyway, so performance is directly affected by internal temperature. I would think that, for 10 kW racks, this is a more sensitive issue than MTTR/MCTR.
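To make the difference concrete, here is a hypothetical sketch (the function names and thresholds are mine, not anything from Dell or Spondoolies firmware): a general-purpose server still has fan headroom to spend on airflow resistance, while a miner whose fans are already pinned can only respond by throttling, which shows up directly as lost hashrate.

```python
# Hypothetical sketch of the difference; thresholds and behaviour are assumptions,
# not actual Dell or Spondoolies firmware logic.
def server_response(board_temp_c: float, fan_duty_pct: float) -> str:
    # A general-purpose server still has fan headroom: it spends a few extra
    # watts on fans and performance stays flat.
    if board_temp_c > 70 and fan_duty_pct < 100:
        return "raise fan speed"
    return "steady state"

def sp30_response(board_temp_c: float, fan_duty_pct: float = 100) -> str:
    # Fans already at full blast: the only remaining knob is clock/voltage,
    # so every extra degree of intake temperature costs hashrate.
    if board_temp_c > 70:
        return "throttle clocks"
    return "steady state"
```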
Imagine what you could achieve if you actually opened the Spondoolies' cases and blew the air over their radiators with large centrifugal fans. Obviously, don't do that with customers' equipment; experiment with your own. I understand that as a prepaid hoster you care very little about MTTR/MCTR, since downtime isn't hitting your pocket. Also, the expected useful lifetime of a Bitcoin miner is much shorter than that of a general-purpose server, so you may in fact see very few failures before your customers liquidate their equipment.
I posted a couple of months ago in the Spondoolies thread about just that:
Just take away the external metal casing. Flip the machine on its side. Borrow a good centrifugal "air mover" from a neighborhood water-damage repair contractor, along with some of the baffling they use to direct the air. Also borrow a contactless thermometer to understand why the SP10 casing is badly designed for cooling and creates unnecessary temperature gradients. I don't know whether the Spondoolies firmware has a "seized fan" shutdown programmed in, so you'll have to experiment with which fans can be removed.
The alternative is just to dress in your best cold-weather clothes and photograph yourself next to your Spondoolies machines. You'll have a nice memento.
I haven't used Spondoolies hardware personally, but I do have relevant experience restarting bankrupt batch data processing facilities filled with racks of 1U and 2U hardware from Dell and Sun. They had the same symptoms: the bottom was getting hot and the intake air had to be really cool. Neither Dell's nor Sun's field service technicians gave us any trouble about warranties or service contracts after seeing our temporary facility. We actually lowered the rate of faults due to seized fans and the accumulation of dust and debris; only hard drive failures increased.
The physics of it is really simple: it doesn't make sense to first concentrate the heat only to dissipate it again right afterwards (the heat-balance sketch below puts rough numbers on it). But my point of view is different from yours: you are just providing the hosting service, whereas I'm talking like an end user minimizing the overall costs. Your customers obviously do care about appearances, whereas I cared much more about things like avoiding downtime, minimizing staff workload, and keeping morale up, and next to nothing about how it looked.
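If you want numbers behind that: the steady-state heat balance is Q = ρ·V̇·c_p·ΔT, so for a fixed heat load the temperature rise of the cooling air is inversely proportional to the volume of air you move through it. A minimal sketch, with a made-up 1.5 kW heat load and made-up airflow figures, just to show the scale of the effect:

```python
# Simple steady-state heat balance, Q = rho * V_dot * c_p * dT.
# All numbers below are illustrative assumptions, not measurements of any specific machine.
rho = 1.2   # kg/m^3, air density
cp = 1005   # J/(kg*K), specific heat of air
Q = 1500    # W, assumed heat load of one unit

def exhaust_rise(airflow_m3_s):
    """Temperature rise of the cooling air across the heat load."""
    return Q / (rho * airflow_m3_s * cp)

print(exhaust_rise(0.05))  # ~25 K with small axial fans (assumed 0.05 m^3/s)
print(exhaust_rise(0.5))   # ~2.5 K with a centrifugal air mover (assumed 0.5 m^3/s)
```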