Whereas, with a quantified probability of traitors (e.g. from hardware MTBF), the risk of a Byzantine fault can be computed, which was the intent of Lamport et al.'s paper.
That's not really the case. Read the paper more carefully. Simple probabilistic hardware failure is easy to cope with using redundancy and majority voting. The hard problem is failures that are more subtle and complex, which can mimic deception and collusion.
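To make the distinction concrete, here is a minimal sketch (my own illustration, not from the paper) of the redundancy-plus-majority-voting that suffices for simple probabilistic failures, and of where it stops helping:

```python
from collections import Counter

def majority_vote(replica_outputs):
    """Return the value reported by a strict majority of replicas,
    or None if no value reaches a majority (vote inconclusive)."""
    counts = Counter(replica_outputs)
    value, count = counts.most_common(1)[0]
    if count > len(replica_outputs) // 2:
        return value
    return None

# Triple modular redundancy: one crashed or corrupted replica is outvoted.
print(majority_vote([42, 42, 99]))   # → 42

# But a simple voter gives no answer when failures are not independent
# bit-flips: subtle or colluding faults can split the vote, and a
# "two-faced" component can report different values to different voters.
print(majority_vote([1, 2, 3]))      # → None
```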
The algorithm becomes a tool in a toolbox which is used to improve robustness against certain types of failures, but the robustness is still never absolute, and in real systems the actual probability of failure is still not known.
I suggest you also read the paper more carefully, specifically Section 6, "Reliable Systems", which is the section we are referring to.
What it says is that as hardware fails, its outputs can act like traitor inputs to other hardware components, causing a cascade of lies. That is precisely the BGP problem, and it is what the solution models with a count of traitors (passing along a traitor's lie doesn't create a new traitor). Even in the case where a derived computation is corrupted by the corrupted input, this is still a quantified probability of a cascade of traitors, obtainable from engineering and math/models applied to hardware MTBF rates. It is more exact science, or at least estimation, than not knowing. There is no decentralization and no Sybil attack introduced, which would otherwise make the estimation highly unknowable and unmeasurable (science requires measurement to validate that models are predictive).
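As a sketch of what "quantified from MTBF" means here (my own example, assuming an exponential failure model and independent component failures, which is itself the engineering assumption under debate): derive a per-mission failure probability from MTBF, then compute the binomial tail probability that more traitors appear than the Byzantine bound m tolerates.

```python
import math

def per_mission_failure_prob(mtbf_hours, mission_hours):
    """Probability a component fails during the mission, assuming an
    exponential failure model: p = 1 - exp(-t / MTBF)."""
    return 1.0 - math.exp(-mission_hours / mtbf_hours)

def prob_more_than_m_failures(n, m, p):
    """Probability that more than m of n independent components fail
    (binomial tail), i.e. the quantified risk that the tolerated
    traitor count m is exceeded."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(m + 1, n + 1))

# Hypothetical numbers: 100,000-hour MTBF, 1,000-hour mission (~1% p).
p = per_mission_failure_prob(mtbf_hours=100_000, mission_hours=1_000)
# Four components configured to tolerate one traitor (n = 4, m = 1).
risk = prob_more_than_m_failures(n=4, m=1, p=p)
print(f"p_fail = {p:.4f}, P(more than 1 of 4 fail) = {risk:.2e}")
```

This is exactly the kind of closed-loop estimate that becomes unmeasurable once Sybil identities make n and p unknowable.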
The examples in the paper are toy examples. Now consider a real system with many interconnected computers, each running millions or billions of lines of code. Passing along a lie does not create a new traitor, but responding incorrectly to an unexpected input does create new traitors. So it is very difficult to ever know how many Manchurian Candidate traitors exist, ready to be triggered.
Of course, you are not omniscient enough to know that this can't be modeled in any application of the solution. I am quite confident the models apply in real-world use cases.
Obviously, outcomes of Turing-complete programs (unbounded recursion) can't be decidable, but dependently typed (total) systems do exist.
Perhaps mission-critical hardware controllers, routers, etc.
Byzantine fault tolerance is used because it allows robustness against complex failures to a greater degree than simple majority voting, even when the components are not simple bits of hardware with an easily-quantifiable MTBF (which are often bullshit, BTW).
The Byzantine use case applies whenever there is redundancy of components that form a circuit, and the MTBF of those nodes of the circuit still applies to models of cascaded failure. Byzantine analysis tells us the limits on this cascaded failure w.r.t. the redundancy.
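Those limits are the well-known bound from Lamport, Shostak, and Pease for oral messages: agreement is reachable with m traitors only when n >= 3m + 1. A trivial sketch of the resulting redundancy budget:

```python
def max_tolerable_traitors(n):
    """Lamport/Shostak/Pease oral-messages bound: n components can
    reach agreement with at most m traitors when n >= 3m + 1,
    so m_max = floor((n - 1) / 3)."""
    return (n - 1) // 3

for n in (3, 4, 7, 10):
    print(f"{n} components tolerate {max_tolerable_traitors(n)} traitor(s)")
# 3 components tolerate 0: triple redundancy alone is not Byzantine-proof.
```

So the MTBF-derived failure probabilities above feed into m, and this bound then dictates how much redundancy n the circuit needs.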
Manufacturer MTBF may be marketing BS, but ConsumerLabs (i.e. independent verification) can compile third-party stats.