This is a valid point, and for once in this debate it leads to a constructive suggestion. If the choice is between contributing 350 GB as a full node and closing the port (thereby contributing effectively nothing), then someone with a 300 GB ISP cap has only one option: close the port, even though the node could quite easily contribute, say, 150 GB. There is a very disturbing all-or-nothing streak in this debate that forces a choice between two bad options at the extremes.
When I was running a node with an open port I managed to keep the bandwidth down by various means, such as limiting the number of connections and capping my upload rate. I tried to regulate things so that I was uploading about 1.5x what I was downloading. I believe the most I ever used was about 100 GB/month, upload and download combined. (I stopped running an open port after two massive DDoS attacks took down my ISP.)
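(For anyone wanting to try something similar now: Bitcoin Core itself has options for this, so it doesn't all have to be done at the OS or router level. The figures below are purely illustrative of how one might aim for very roughly 150 GB of upload a month; check the units in your version, as maxuploadtarget is specified in MiB per day in the releases I have seen.)

    # bitcoin.conf sketch (illustrative values, not a recommendation)
    maxconnections=20        # keep fewer peers than the default, so less upload demand
    maxuploadtarget=5000     # aim for ~5000 MiB of upload per day, roughly 150 GB/month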
From watching my connection activity for a while, I observed that a lot of the upload bandwidth went to serving older blocks. A limit on the bandwidth devoted to uploading older blocks would certainly help. With some creative effort it might even be possible to devise a scheme whereby full nodes recoup some of their operating costs by charging leechers for services rendered.
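Incidentally, if I understand Bitcoin Core's maxuploadtarget correctly, it does more or less exactly this: once the daily target is reached the node stops serving historical blocks to ordinary peers, while still relaying new blocks and transactions. Whitelisted peers are, as far as I know, exempt from that limit, so you could keep serving a trusted node in full, e.g.:

    bitcoind -maxuploadtarget=5000 -whitelist=192.0.2.10   # 192.0.2.10 is just a placeholder address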