Board: Bitcoin Discussion
Re: Why do average people run full nodes
by rackcityb1 on 01/09/2014, 20:31:39 UTC
I know that it helps the ecosystem. It verifies transactions, keeps a full copy of the blockchain, etc...

But it consumes a lot of ram and CPU cycles.
I run Bitcoin Core (with limited incoming connections) and Armory. The cost was a big one-time download that took a few hours, plus ~50 GB of disk space (of which I have plenty free), ~650 MB of RAM out of 16 GB, ~1 CPU hour out of 400 (based on current uptime and CPU-time usage), and minimal ongoing bandwidth. The last few of those I can pause any time I want to reclaim the resources.
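(For anyone reading later who wants to trim that footprint: newer Bitcoin Core releases added configuration options that cap exactly these resources. A sketch of a bitcoin.conf, with illustrative values; note that prune support arrived in releases after this thread, and a pruned node still fully validates but no longer serves historical blocks:)

```ini
# bitcoin.conf -- sketch of resource caps (options from later Bitcoin Core releases)
dbcache=300          # limit the database/UTXO cache to ~300 MiB of RAM
prune=10000          # keep only ~10 GB of block files on disk (still fully validating)
maxconnections=16    # cap the number of peer connections
```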
In return, I get to use a good, secure client (Armory) that is connected to the network independently of any external service, and without undue reliance on peers to tell the truth about the state of the blockchain or to protect my privacy. Having powerful local clients, instead of overly simplified ones, also helps me learn more about the technologies behind Bitcoin. And I like helping to secure the network.

For me, that's an agreeable trade, so I run a full node. For some people, the requirements are relatively larger, and the rewards are less important to them, so the balance does not tip in the "run a full node" direction.

Self-interest alone can, in fact, be sufficient, and it is in my case. Altruism is a small part of why I run a full node, but it is neither necessary nor sufficient on its own.

And I know how average users can strain the network. If we had fewer of them, I'm sure I could bump up my maximum number of connections substantially. I keep it low because someone will occasionally want to download a huge number of blocks from me, and since I have little upload bandwidth, that interferes with anything else I'm trying to do.
Just a quick tip -- you can run armoryd/bitcoind and limit the bandwidth consumed to ~8 KB/s down and ~2 KB/s up while remaining fully functional and up to date (the real requirements should be significantly lower than that, but I haven't checked in a while and wanted to be conservative), using an application-level bandwidth throttler like NetLimiter.
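(An alternative to an OS-level throttler, for later readers: Bitcoin Core itself eventually grew a built-in cap. A hedged sketch of a bitcoin.conf; maxuploadtarget was added in Bitcoin Core 0.12, after this thread, and limits daily upload volume rather than instantaneous rate, so it mainly stops you from serving huge historical-block downloads:)

```ini
# bitcoin.conf -- limit upload without an external throttler
maxuploadtarget=1024   # try to keep upload traffic under ~1024 MiB per 24h window
maxconnections=12      # fewer peers also means fewer simultaneous block requests
```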
I would think that these limits would prevent you from relaying unconfirmed transactions, since your node would spend much of its time downloading recently found blocks (if the average block is close to the limit).