As an approximation: as the "average" verification-plus-propagation delay decreases linearly relative to the block period (the block period being enforced by the PoW difficulty), the orphan rate, and the probability of being an uninformed node or on an orphaned chain at some confirmation depth z, decreases exponentially.
It's certainly the case that, during nominal operation, latency is a driving factor in the orphan rate, especially as the block period decreases. It's easy to see why: the easier it is to solve a PoW, the more miners will solve blocks on the same parent at nearly the same time, so orphans become more common as the same data structure gets updated simultaneously by different miners.
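For intuition, here's a back-of-the-envelope sketch (mine, not from the post) of the standard Poisson approximation: if blocks are found at rate 1/T network-wide and a solution takes τ seconds to propagate, the chance that someone else solves a competing block during that window is roughly 1 − e^(−τ/T):

```python
import math

def orphan_rate(propagation_delay_s: float, block_period_s: float) -> float:
    """Approximate orphan rate when block discovery is a Poisson process.

    During the propagation window tau, other miners are still working on
    the old tip; the chance at least one of them also solves a block in
    that window is 1 - exp(-tau / T), roughly the orphan rate.
    """
    return 1.0 - math.exp(-propagation_delay_s / block_period_s)

# Shrinking the block period with a fixed 10 s network delay blows up orphans:
for period in (600, 60, 10):          # 10 min, 1 min, 10 s block periods
    print(f"T={period:>3}s  orphan rate ~ {orphan_rate(10, period):.1%}")
# -> ~1.7%, ~15.4%, ~63.2%
```

The 10 s delay and the memoryless-arrival assumption are illustrative only, but they show why the latency-to-period ratio, not latency alone, is what drives the orphan rate.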
However, that is nominal operation, not an attack scenario. It does not follow that statistically fewer orphans means better security, precisely because an attack is not nominal operation.
So in my design, the math for choosing the longest chain to mine on includes calculations about what is statistically fraudulent.
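The post doesn't spell out that math, so purely as a hypothetical illustration of one such calculation: honest block arrivals are approximately Poisson, so a chain segment whose blocks were found implausibly fast for the hashpower implied by its difficulty can be flagged statistically. The function name and numbers below are my own assumptions, not the author's design:

```python
import math

def p_blocks_that_fast(n_blocks: int, elapsed_s: float, period_s: float) -> float:
    """P(honest miners find n_blocks within elapsed_s) at block period period_s.

    Block arrivals are Poisson, so this equals P(Poisson(elapsed/period) >= n).
    A tiny value means the segment was produced implausibly fast for the
    hashpower implied by its difficulty -- one possible fraud signal.
    """
    mu = elapsed_s / period_s
    tail = 1.0 - sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n_blocks))
    return max(tail, 0.0)

print(p_blocks_that_fast(4, 2400, 600))  # ~0.57: consistent with honest mining
print(p_blocks_that_fast(4, 115, 600))   # ~5e-5: statistically suspicious burst
```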
Taking a guess here: I suspect you have two classes of miner? Type A is the professional miner expending a lot of energy to produce chains of work, and type B is the everyday user whose transactions arrive already "mined", i.e., with a PoW included in the transaction itself. So the key question becomes how to ensure that type B miners cannot be impersonated to throw your new chain-selection rule out of whack, since their PoW difficulty must be trivially easy to solve.
You'll need to prove that a type A miner (with, say, 1M× the hashing power) cannot gain 1M× the influence over the chain-selection rule (by, say, impersonating 1M type B miners); otherwise, the design collapses into the regular longest-chain selection rule.
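To make that collapse concrete, here's a toy model (the weighting rule is entirely hypothetical, not anything proposed in this thread) in which chain weight credits each trivial type-B PoW attached to a block. Without some cost bound to type-B identities, a type-A miner can mint them wholesale:

```python
# Hypothetical chain-weight rule: each block carries one "heavy" type-A PoW
# plus any number of trivial type-B PoWs attached to transactions, and the
# type-B credits are meant to dilute raw hashpower.
TYPE_A_DIFFICULTY = 1_000_000   # illustrative numbers, not from the thread

def chain_weight(blocks: list[dict]) -> int:
    """Toy weight: type-A work plus one credit per type-B PoW in the block."""
    return sum(TYPE_A_DIFFICULTY + b["type_b_pows"] for b in blocks)

# Honest chain: 10 blocks, each carrying 1,000 genuine type-B PoWs.
honest = [{"type_b_pows": 1_000} for _ in range(10)]

# Sybil chain: an attacker with ~1M x a user's hashpower grinds the same
# trivial type-B PoWs under fabricated identities at negligible cost.
sybil = [{"type_b_pows": 1_000_000} for _ in range(10)]

print(chain_weight(honest), chain_weight(sybil))
# Unless type-B identities cost something the attacker cannot fake, the rule
# degenerates into "most total work wins", i.e., ordinary heaviest-chain.
```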