It's interesting to see the two sides of this debate. Is 0.8 the problem because it doesn't conform to the established behaviour of the network, or are the older clients the problem because they use a crappy database (Berkeley DB) which can't handle large blocks?
From what I can gather, the database used by 0.8 (LevelDB) is far superior, so it will continue to be used. But it seems the 0.8.1 patch will place an artificial cap on the block size so that it remains compatible with older versions.
Obviously this was the best solution to apply, and I'm glad the hard fork route was not taken. Such a route needs to be properly planned, and even then it would be difficult to coordinate a change like increasing the maximum block size.
But do we really need to increase the maximum block size? I don't know all the technical details behind Bitcoin, but if one block is added to the blockchain roughly every 10 minutes, it seems reasonable to keep the maximum size relatively small.
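A rough back-of-the-envelope sketch of why the block size matters for throughput (the 1 MB cap and the ~250-byte average transaction size are ballpark assumptions, not exact figures):

```python
# Back-of-the-envelope: max transactions per second at a given block size.
# Assumes a 1 MB block size cap, ~250 bytes per transaction on average,
# and one block every 10 minutes -- all rough figures for illustration.
def max_tx_per_second(block_size_bytes=1_000_000,
                      avg_tx_bytes=250,
                      block_interval_sec=600):
    txs_per_block = block_size_bytes // avg_tx_bytes
    return txs_per_block / block_interval_sec

print(max_tx_per_second())  # roughly 6-7 tx/s under these assumptions
```

So under these assumptions a small cap limits the network to a handful of transactions per second, which is the trade-off the block size debate is really about.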