Add randomness to the processing so it's harder to pinpoint which nodes do what and, to some degree, when.
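A minimal sketch of the idea (the function names and delay bounds are purely illustrative, not from any existing code): each node sleeps for a random interval before handling a task, so an observer correlating traffic timing can't easily tell which node did the work, or exactly when.

```python
import random
import time

def process_with_jitter(task, handler, min_delay=0.0, max_delay=0.5):
    # Draw a delay uniformly from [min_delay, max_delay) and sleep,
    # decoupling the observable response time from the actual work.
    delay = random.uniform(min_delay, max_delay)
    time.sleep(delay)
    return handler(task)

# Example: handle a task with up to 10 ms of random jitter.
result = process_with_jitter("task-42", lambda t: t.upper(), max_delay=0.01)
```

The same trick applies to message forwarding: a small random hold time per hop blurs the timing correlations an eclipse-style adversary would use.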
That's pretty much a necessity. The stirring of the mix brought about by Mr Spread's innovative notion of creating an internal market for service-node ownership is likely to make it more difficult to mount an eclipse attack [1, 2]. (Apologies for the PDFs; I haven't had time to write up a tour d'horizon.)
There's an interesting intro to paper [1], which covers some of the issues pertinent to the operational aspects of running an overlay network (which is, apparently, the accepted architectural term for what we've been calling 2-tier):
Overlay networks facilitate the deployment of distributed application functionality at edge nodes without the need to modify existing network infrastructure. Overlays serve as a platform for many popular applications, including content distribution networks like BitTorrent, CoDeeN, and Coral, file-sharing systems like Gnutella, KaZaA, and Overnet/eDonkey and end-system multicast systems like ESM, Overcast, NICE, and CoolStreaming.
Moreover, a large number of research projects explore the use of overlays to provide decentralized network services. Robust overlays must tolerate participating nodes that deviate from the protocol. One reason is that the overlay membership is often open or only loosely controlled. Even with tightly controlled membership, some nodes may be compromised due to vulnerabilities in operating systems or other node software. To deal with these threats, overlay applications can rely on replication, self-authenticating content, Byzantine quorums, or Byzantine state machines to mask the failure or corruption of some overlay nodes.
[1] http://www.eecs.harvard.edu/~mema/courses/cs264/papers/eclipse-infocom06.pdf
[2] http://eprint.iacr.org/2015/263.pdf

Cheers
Graham