It's complicated by the fact that with 1 trial, there is a 0% chance that the observed house take will fall within 1.89 to 1.91 percent.
The chance is not 0%, but it is very small, perhaps <<0.001%.
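To make the single-trial point concrete, here's a quick Python sketch. It assumes a simple even-money bet with win probability 0.4905 (i.e. a 1.9% house edge); the actual game's payout structure may differ, which would change the set of possible single-trial outcomes, but not the general point that the observed take only concentrates near the true edge as trials accumulate:

```python
import random

def house_take(n_bets, p_win=0.4905, seed=0):
    """Observed house take (fraction of total wagered kept by the house)
    over n_bets IID even-money bets where the player wins with p_win."""
    rng = random.Random(seed)
    player_net = sum(1 if rng.random() < p_win else -1 for _ in range(n_bets))
    return -player_net / n_bets

# With one even-money bet the take is exactly -100% or +100%,
# so it cannot land anywhere near 1.9%.
print(house_take(1))

# With many bets the observed take concentrates near the 1.9% edge.
print(house_take(1_000_000))
```

Even at a million trials the observed take typically wanders a few tenths of a percent around the edge, so a band as narrow as 1.89 to 1.91 percent is still often missed.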
Markov chains are more useful when the system's next state depends on its current state. In this case, that would be something like "losing 3 in a row changes your chances of winning the next one". Since we don't have that, you can use regular IID statistics.
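The "no relationship between trials" claim is easy to check by simulation. This sketch (again assuming an even-money bet with win probability 0.4905, an illustrative number rather than the actual game's) estimates the win rate overall versus the win rate immediately after three straight losses; for IID trials the two should agree:

```python
import random

def conditional_win_rate(p_win=0.4905, n=1_000_000, seed=1):
    """Compare P(win) overall vs. P(win | previous 3 bets all lost)
    for IID bets -- independence means the two estimates should match."""
    rng = random.Random(seed)
    outcomes = [rng.random() < p_win for _ in range(n)]
    # Wins that immediately follow a run of 3 losses.
    after_3_losses = [outcomes[i] for i in range(3, n)
                      if not any(outcomes[i - 3:i])]
    overall = sum(outcomes) / n
    conditional = sum(after_3_losses) / len(after_3_losses)
    return overall, conditional

overall, conditional = conditional_win_rate()
print(overall, conditional)
```

If the game instead had state (a shoe being dealt down, a progressive payout), the two numbers would diverge, and that is exactly when a Markov model starts paying for itself.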
It seems to me that SD could be modelled as a Markov chain, since the game is stochastic and has the Markov property; that is, the outcome of trial B does not depend on the outcome of trial A (I'm not sure if that's what you meant?). Although the probability of consecutive less-than-1 "successes" is very remote, it is still possible and has a probability associated with it. Given an infinite number of trials, it's bound to happen, and there is no "losing 3 in a row changes your chance of winning the next one" in the Markov model, because the game has the Markov property (or maybe it doesn't?). Maybe I'm misunderstanding the application of Markov models, so please correct any errors I've made.
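To put a rough number on "very remote, but bound to happen eventually": for independent trials, a run of k consecutive successes each with probability p occurs with probability p**k, and you'd expect to wait on the order of 1/p**k trials before first seeing one. The p and k below are purely illustrative values, not figures from this thread:

```python
def run_probability(p, k):
    """Probability that k consecutive independent trials, each succeeding
    with probability p, all succeed."""
    return p ** k

p, k = 0.01, 3  # illustrative: a 1-in-100 event, three times in a row
print(run_probability(p, k))      # about 1e-06
print(1 / run_probability(p, k))  # expect on the order of a million trials
```

So both replies can be right at once: the run has vanishingly small probability in any short session, yet over an unbounded number of trials it occurs with probability 1.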