Maybe I'm wrong about this, but... the difficulty level tells us how hard it is to find a block; from this we can also calculate the expected interval between finding blocks. But this assumes the network hashrate has remained constant since the last difficulty adjustment. In reality, I'd expect mining returns to track network hashrate more closely than difficulty, especially if the hashrate increases significantly.
Correct?
It's the other way round. Difficulty is a number that determines the odds of finding a block with a given hashrate. The average time in seconds is: difficulty * 2^32 / hashrate.
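To make that formula concrete, here's a minimal sketch in Python; the difficulty and hashrate values below are assumptions picked purely for illustration, not real network figures.

```python
# Expected time to find a block: difficulty * 2**32 / hashrate.
difficulty = 80e12      # assumed network difficulty (hypothetical)
my_hashrate = 100e12    # assumed personal hashrate in hashes/second (100 TH/s)

expected_seconds = difficulty * 2**32 / my_hashrate
print(f"Expected time per block: ~{expected_seconds / 86400:.0f} days")
```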
Difficulty is tuned so that the entire network, on average, finds one block every 10 minutes. But if the overall network hashrate is higher than it was at the previous adjustment, blocks will be found more often than once every 10 minutes, so the average block generation time goes down, which causes the next adjustment to raise the difficulty.
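As a sketch of that feedback loop, here's roughly how the retarget works, assuming Bitcoin's 2016-block, 10-minute schedule (the real rule also clamps each adjustment to a factor of 4, which this sketch omits):

```python
TARGET_TIMESPAN = 2016 * 600  # 2016 blocks at 600 seconds each (~2 weeks)

def next_difficulty(old_difficulty, actual_timespan_seconds):
    # Blocks arrived faster than scheduled -> shorter timespan -> higher difficulty.
    return old_difficulty * TARGET_TIMESPAN / actual_timespan_seconds

# If the network hashrate doubled, the 2016 blocks take about half the target
# timespan, so the difficulty roughly doubles:
print(next_difficulty(1.0, TARGET_TIMESPAN / 2))  # -> 2.0
```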
This is also what I understand, up to this point.
But your individual expected block generation time for your particular hashrate remains exactly the same until the difficulty changes.
Here's where my understanding differs.
1. Block generation is probabilistic, so the actual time to find a block varies around the expected value. <-- correct?
2. When someone else publishes a block, all the work we've previously done is wasted and we need to base our calculations on the new block. <-- half correct?
3. In addition, we waste a small amount of effort (spent hashing against stale data) that is proportional to network latency. <-- correct but usually insignificant?
4. Therefore, we are competing against the entire network's hashrate, and our expected returns vary inversely with network hashrate rather than with difficulty. <-- this is what I'm unsure about
5. Even if the above reasoning is wrong, a thought experiment shows that the conclusion must be correct: if the network hashrate somehow doubles right after a difficulty adjustment, it is clear that our expected return will approximately halve even before the next difficulty adjustment (see the sketch below).
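To make point 5 concrete, here's a quick sketch using the formula quoted above; the hashrate and difficulty numbers are assumptions picked for illustration.

```python
# Checking point 5 with the formula above: expected blocks found per day is
# my_hashrate / (difficulty * 2**32) * 86400. All numbers are hypothetical.
def blocks_per_day(my_hashrate, difficulty):
    return my_hashrate / (difficulty * 2**32) * 86400

my_hashrate = 100e12   # assumed personal hashrate, in hashes per second
difficulty = 80e12     # assumed difficulty set at the last adjustment

print(blocks_per_day(my_hashrate, difficulty))      # before network hashrate doubles
print(blocks_per_day(my_hashrate, difficulty))      # after it doubles, same difficulty
print(blocks_per_day(my_hashrate, 2 * difficulty))  # after the next retarget
```

Per that formula, my per-day rate changes only when the difficulty does; the halving shows up at the retarget, which itself arrives in roughly half the usual time because the network is finding blocks faster than every 10 minutes.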