It's pretty curious that I'm talking about how we need to increase the Bitcoin block size while others are coming up with solutions to decrease the amount of storage needed to run Bitcoin.
Do you have a full archival node? Because if you do, then I wonder how many times it crashed and you had to reindex it. Copying 500 GB of data from one disk to another is not a big deal; the bigger problem is verification time, where you can spend a week reindexing the chain and rebuilding the database from scratch, even if all blocks are already downloaded.
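For reference, Bitcoin Core has two relevant startup options here; both replay the locally stored blocks, so neither redownloads anything (a rough sketch of the standard options, nothing specific to any one setup):
Code:
# Rebuild the block index AND the chainstate (UTXO database)
# from the blk*.dat files already on disk:
bitcoind -reindex

# Rebuild only the chainstate, reusing the existing block index;
# skips the index-scanning step but still replays every block:
bitcoind -reindex-chainstate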
Do you use a really old device or have low RAM capacity? Last time my full node crashed, it took about 1-2 days to re-index, where the blockchain data is stored on an HDD and I allocated a big amount of RAM to Bitcoin Core.
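In case it helps, the RAM Bitcoin Core uses for its database cache is set with the dbcache option (in MiB; the default is only a few hundred). Something like this in bitcoin.conf speeds up reindexing considerably; 8192 is just an example value, size it to your machine:
Code:
# bitcoin.conf -- give the database cache ~8 GiB instead of the small default;
# flushes to disk become rarer, which matters a lot on an HDD:
dbcache=8192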
Do you have a full archival node? Because if you do, then I wonder how many times it crashed and you had to reindex it.
If you are worried about corrupted data, you should invest in some redundancy with multiple disks. ZFS has self-healing capabilities (given checksums and redundant disks) and supports snapshots, and LVM has snapshots too. Take snapshots of the bitcoin data folder at some interval, and you can return to a previous state and restart bitcoin with no reindex.
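A minimal sketch of that workflow with ZFS, assuming the data directory lives on a dataset named tank/bitcoin (the dataset name and snapshot label are placeholders):
Code:
# Stop the node first so the snapshot captures a consistent database state:
bitcoin-cli stop

# Snapshot the dataset that holds the data directory:
zfs snapshot tank/bitcoin@pre-crash

# Later, after a crash or corruption, roll the dataset back
# to that snapshot and restart -- no reindex needed:
zfs rollback tank/bitcoin@pre-crash
bitcoind -daemon
The LVM equivalent would be lvcreate -s to take the snapshot and lvconvert --merge to roll the volume back.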
ZFS is a good file system, but isn't it less suitable for an external disk (OP mentioned he uses one) due to ZFS pool management?