To be honest, the difficulty is more or less random. Even with a stable network hash rate you could find two blocks back to back in under 10 seconds if you are very lucky. Since the re-target interval is so short, the diff makes big jumps. The diff changes based on the time between blocks: if one block was solved in 1 minute and the next in 10 seconds, the diff will jump significantly. Where are you seeing the net hash rate at around 300 MH/s? I haven't seen it go above 200 MH/s in the last week or so.
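To see why back-to-back fast blocks are normal luck rather than a hash rate change, here is a quick sketch (my own illustration, not the coin's code) that models block finding as a Poisson process, so the time between blocks is exponentially distributed around the designed 40 s target:

```python
import random

random.seed(1)

TARGET = 40.0  # designed average block time, seconds

# With constant hash rate and difficulty, block finding is memoryless,
# so inter-block times follow an exponential distribution with mean TARGET.
intervals = [random.expovariate(1.0 / TARGET) for _ in range(10_000)]

mean = sum(intervals) / len(intervals)
fast = sum(t < 10 for t in intervals) / len(intervals)

print(f"mean interval: {mean:.1f}s")   # close to 40s
print(f"under 10s    : {fast:.1%}")    # roughly 1 - e^(-10/40), about a fifth of blocks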
Now that I think about it longer, it may have been that I saw a network hash rate of 210 or 220 MH/s with the pool doing over 25 MH/s.
So you are confirming that the diff calculation is very sensitive to the last block's solve time, which is a random process driven by luck. As a result the diff jumps around and the block time can swing in either direction, but the swings are asymmetric: the maximum decrease is bounded (the block time can only fall by up to the designed 40 s), while the maximum increase is essentially unbounded (minutes or hours). The net effect of a jumping diff is therefore a much higher average block time. For example, at a steady diff of 3 a block might suddenly be found after 5 seconds, i.e. in 1/8 of the designed time, which could lead the re-target algorithm to simply (and erroneously) multiply the diff by 8 in an attempt to stretch the next block time to 8 x 5 s.
However, since the diff of 3 was actually giving an average block time of 40 s, and it was pure luck that a block was found after 5 s, the new diff of 3 x 8 = 24 will give an average block time of 8 x 40 s = 5 min 20 s. Hence the upward correction of the diff should be dampened to avoid these overshoots, since raising the diff increases block time far more than lowering it decreases block time.
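The overshoot arithmetic above can be written out as a few lines (a sketch of a naive proportional retarget, assuming expected solve time scales linearly with diff at a fixed hash rate):

```python
TARGET = 40.0   # designed block time, seconds
diff = 3.0      # current difficulty; average solve time is 40s at this diff

# A lucky block is found after only 5 seconds.
solve_time = 5.0

# Naive retarget: scale the diff by (target / actual) in one step.
new_diff = diff * (TARGET / solve_time)        # 3 * 8 = 24

# With an unchanged hash rate, expected solve time scales with the diff,
# so the overshot diff stretches the next block far past the target.
expected_time = TARGET * (new_diff / diff)     # 8 * 40s = 320s

print(new_diff)        # 24.0
print(expected_time)   # 320.0 seconds, i.e. 5 min 20 s
```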
Since a switching pool that turns on and off on a per-block basis cannot be countered by a diff that is calculated before the block starts, there is no reason for a very aggressive diff increase that just hurts all steady miners with much longer block times. It is better to dampen the diff so it responds reasonably fast to sudden hash rate changes (within a couple of re-targets) without jumping to a completely new value every time. For example, the diff could be limited to no more than doubling or halving per re-target, which still allows a very large change over a few steps but avoids the huge jumps we have seen. If the concern is recovering quickly from a major drop in network hash rate, the re-target algorithm could make an exception: when a very long time has passed since the last block, re-target freely and immediately, but use the smoother rule while blocks are found regularly. Reducing the jumps will keep the average block time closer to the designed 40 s.
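The dampened rule with a stall exception could look something like this (purely a sketch of the proposal, not any coin's actual algorithm; the threshold of 10 target intervals for "stalled" is my own made-up number):

```python
def retarget(diff, solve_time, target=40.0, max_step=2.0, stall=10 * 40.0):
    """Damped proportional retarget.

    Clamp each adjustment to at most doubling or halving the diff,
    except when the chain appears stalled (no block for `stall` seconds),
    in which case drop the diff freely to recover quickly.
    """
    ratio = target / solve_time
    if solve_time >= stall:
        # Emergency exception: hash rate likely collapsed, retarget fresh.
        return diff * ratio
    # Normal case: limit the step to [1/max_step, max_step].
    ratio = max(1.0 / max_step, min(max_step, ratio))
    return diff * ratio

print(retarget(3.0, 5.0))     # 6.0  (clamped to 2x, not the naive 24)
print(retarget(3.0, 80.0))    # 1.5  (halved)
print(retarget(24.0, 800.0))  # 1.2  (stalled, free downward retarget)
```

With the clamp, even a diff that is off by a factor of 8 is corrected within three re-targets, while a single lucky 5-second block can no longer push the diff to 8x.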