So, if in this process we determine that the average time for the pool to solve a block is 1 minute... and we determine that for a given miner the average time to solve a share is 45 seconds... what are we really saying?
We are saying that the pool solves a block in 30 seconds about as often as it solves one in 1 minute 30 seconds.
We are also saying that we solve a share in 30 seconds about as often as we do in 60 seconds.
Block [0=====15=====30=====45=====60=====75=====90=====105=====120]
Share [0=====15=====30=====45=====60=====75=====90]
So here's a way to think of it: the block solve will fall somewhere along its range, and the share solve will fall somewhere along its range, both at random.
I hope what you're trying to say is that p_share(t) = const. and p_block(t) = const.; this is correct. But to represent that, your scale would have to go all the way to infinity.
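To make the constant-probability point concrete, here is a minimal sketch (purely illustrative; only the 60 s block average and 45 s share average come from the example above). A constant solve probability per second means the solve time is exponentially distributed, so short solves are the most common and the tail runs out indefinitely, which no finite scale can show.

import random

random.seed(1)

AVG_BLOCK = 60.0   # average pool block time in seconds (from the example)
AVG_SHARE = 45.0   # average share time in seconds (from the example)

# With a constant probability of solving per second, the time to a solve
# is exponentially distributed with the given mean.
block_times = [random.expovariate(1.0 / AVG_BLOCK) for _ in range(100_000)]
share_times = [random.expovariate(1.0 / AVG_SHARE) for _ in range(100_000)]

# The distribution is not symmetric around the mean: solves well under the
# average are common, and a noticeable fraction take far longer than twice
# the average -- the "scale" really does run out to infinity.
print("block solves under 30 s:", sum(t < 30 for t in block_times) / len(block_times))
print("block solves over 120 s:", sum(t > 120 for t in block_times) / len(block_times))
print("share solves over  90 s:", sum(t > 90 for t in share_times) / len(share_times))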
On any given go-around, I could easily solve a share first; in fact, in this case that will happen more often than not. But MANY times the block will be solved first, and I will get nothing. This is the loss we are talking about: the work I did means nothing, and it is not counted or credited by the pool. This loss is present in every cryptocurrency, but it doesn't really become a serious issue until you have these coins that are so easy that we find blocks in seconds and minutes.
But we can combat this with a smaller diff. Imagine the same scenario, but I lower my diff so that I solve a share in 30 seconds on average.
Block [0=====15=====30=====45=====60=====75=====90=====105=====120]
Share [0=====15=====[30]=====45=====60]
Now, you can imagine again numbers falling randomly on these scales, representing the time it takes to solve a block versus a share.
This time, you can visually see it's much more likely that the share time will be less than the block solve time.
This is your fallacy. In your model, changing difficulty would do nothing about that: the probability of finding a share at each point in time would go up by the same factor everywhere. And by your logic, the rejected-share percentage should be 100%, since most of the scale lies to the right of the found block (from the found block out to infinity), which is infinitely more.
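A hedged sketch of that reductio, under the model as stated (the 600 s horizon, trial count and seed are my own, purely for illustration): if shares keep arriving at a constant rate and every share found after the block counts as lost, the lost fraction is the same for any share difficulty, and it keeps climbing toward 100% the further out you extend the timeline.

import random

random.seed(1)

AVG_BLOCK = 60.0        # average block time (s)
HORIZON = 600.0         # arbitrary cut-off; the model itself has no end point

def lost_fraction(avg_share, trials=20_000):
    """Fraction of shares found after the block, if shares keep coming
    at a constant rate right up to the (arbitrary) horizon."""
    found_before = found_after = 0
    for _ in range(trials):
        block_t = random.expovariate(1.0 / AVG_BLOCK)
        # Shares as a Poisson process: exponential gaps with the given mean.
        t = random.expovariate(1.0 / avg_share)
        while t < HORIZON:
            if t < block_t:
                found_before += 1
            else:
                found_after += 1
            t += random.expovariate(1.0 / avg_share)
    return found_after / (found_before + found_after)

# Same lost fraction whether shares average 45 s or 30 s; raise HORIZON
# and both numbers creep toward 1.0.
print("45 s shares:", round(lost_fraction(45.0), 3))
print("30 s shares:", round(lost_fraction(30.0), 3))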
But there is another flaw. In reality, as soon as there's a new block, you will start looking for solutions to the new block (with some delay, of course). So there is zero chance of you still working on a block after you have received the broadcast for it.
What actually happens when your share is rejected is that it was found in the few (milli)seconds before you actually received the new block broadcast. It is only due to network lag.
To use a similar visualisation:
x = pool/you searching for block X
y = pool/you searching for the next block
s = you find a share
X = block X found / you receive the broadcast
Pool server [xxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyy]
Broadcast [-------------------X-------------------]
<--+--> Total network latency (round trip)
You [xxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyy]
Broadcast [----------------------X----------------]
Share [-s---s---------------s---------------s-]
Any of your shares found within the arrows will be dropped (since your share will arrive at the server after it already knew about the block).
So essentially, the percentage of rejected shares should be about t_latency / t_block.
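Here's a rough simulation of that picture (the 60 s block time comes from the example above; the 0.5 s round-trip latency is a made-up number). A share is rejected only if it is found inside the latency window after the block is solved, so the rejected fraction lands near t_latency / t_block no matter what the share difficulty is.

import random

random.seed(1)

AVG_BLOCK = 60.0   # average block time (s)
LATENCY = 0.5      # round-trip network latency (s), purely illustrative

def rejected_fraction(avg_share, blocks=50_000):
    """Fraction of shares rejected because they were found in the
    latency window at the end of a block interval."""
    accepted = rejected = 0
    for _ in range(blocks):
        block_t = random.expovariate(1.0 / AVG_BLOCK)
        # You mine this block from t=0 until you hear the broadcast at
        # block_t + LATENCY, then switch; shares arrive at a constant rate.
        t = random.expovariate(1.0 / avg_share)
        while t < block_t + LATENCY:
            if t < block_t:
                accepted += 1
            else:
                rejected += 1   # found after the block, before the broadcast
            t += random.expovariate(1.0 / avg_share)
    return rejected / (accepted + rejected)

print("expected  :", round(LATENCY / AVG_BLOCK, 4))
print("45 s diff :", round(rejected_fraction(45.0), 4))
print("30 s diff :", round(rejected_fraction(30.0), 4))

With these made-up numbers, both difficulties come out around 0.8%, right on top of t_latency / t_block.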
So actually, lowering the share difficulty might have the exact opposite of the effect you want: if it substantially increases the load on the server, latency might go up, and therefore rejected shares will go up as well.
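As a rough feel for that trade-off (the latencies below are invented, just to show the scaling): for a fixed block time, the reject rate tracks latency directly, so extra server lag caused by a flood of low-diff shares costs more than the lower diff can win back.

AVG_BLOCK = 60.0  # average block time in seconds (from the example above)

# Hypothetical round-trip latencies; rejected share percentage ~= latency / block time.
for latency in (0.1, 0.5, 2.0):
    print(f"{latency:>4} s latency -> ~{100 * latency / AVG_BLOCK:.2f}% rejected")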