The standard protocol is a commit and reveal protocol, as mentioned above, using hashes. The problem commit and reveal protocols have is that the last party to reveal can jam the process if he doesn't like the result. In some contexts that's harmless, in others it's fatal.
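A minimal sketch of that hash-based commit and reveal flow (the party names, helper functions, and sizes here are illustrative, not any particular implementation):

```python
import hashlib, os

def commit(secret: bytes, salt: bytes) -> bytes:
    # Publish H(salt || secret); the salt prevents brute-forcing small secrets.
    return hashlib.sha256(salt + secret).digest()

def reveal_ok(commitment: bytes, secret: bytes, salt: bytes) -> bool:
    return commitment == hashlib.sha256(salt + secret).digest()

# Each party commits first...
secrets = {name: (os.urandom(32), os.urandom(16)) for name in ("alice", "bob", "carol")}
commitments = {name: commit(s, salt) for name, (s, salt) in secrets.items()}

# ...then reveals. The combined randomness is H(secret_a || secret_b || secret_c),
# but the last party to reveal learns the result first and can simply refuse to reveal.
assert all(reveal_ok(commitments[n], *secrets[n]) for n in secrets)
combined = hashlib.sha256(b"".join(secrets[n][0] for n in sorted(secrets))).hexdigest()
print(combined)
```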
The hash-the-last-block's-ID approach can be biased by miners. Without knowing what the result would be used for, you can't argue that they wouldn't do it... if they could make themselves win a 100 BTC lottery for sure, it would be totally reasonable to orphan and throw out blocks to pull it off. The earlier proposal to use "the last 64 blocks" doesn't help: the last block alone is sufficient, since it already commits to all prior blocks anyway.
You can attempt to reduce the holdup issue in a commit and reveal protocol by using verifiable secret sharing. For example: Alice, Bob, and Charley generate secret values a, b, c and, using ECC, compute and publish points aG, bG, cG -- these are their commitments. Then each party generates a new random value r, sends one of the other two parties r, and sends the remaining party their secret minus r; those parties publish rG and (s-r)G. So, for example, Alice sends r to Bob and (a-r) to Charley. Bob and Charley publish rG and (a-r)G, and each of them can add the published points and check that the sum equals Alice's commitment, because aG = rG + (a-r)G; but the players all still know nothing about each other's secrets. Once the sharing is done, people can start revealing their secrets. Alice reveals a, Bob reveals b... Charley decides he doesn't like the result and falls offline, but now Alice and Bob can reveal their shares of Charley's secret... Whoever remains can compute H(a||b||c). Of course, if Bob and Charley are co-conspirators they can abort as soon as they get Alice's shares if they don't like the result. This approach generalizes to any threshold of participants.
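A toy sketch of the sharing and checking steps, using exponentiation in a small multiplicative group in place of EC point multiplication (so pow(g, x, p) stands in for the point xG); the group parameters are illustrative and far too small for real use:

```python
import random

# Toy group standing in for the elliptic curve: p is a safe prime, q the prime
# order of the subgroup generated by g.  Parameters chosen only for readability.
p, q, g = 167, 83, 2

def commit(x):
    # g^x mod p plays the role of the commitment point xG.
    return pow(g, x, p)

# Each party picks a secret and publishes its commitment.
a, b, c = (random.randrange(1, q) for _ in range(3))
A, B, C = commit(a), commit(b), commit(c)

# Alice splits her secret: r goes to Bob, (a - r) mod q goes to Charley.
r = random.randrange(1, q)
share_bob, share_charley = r, (a - r) % q

# Bob and Charley publish commitments to their shares; anyone can combine them
# and check against Alice's commitment, since g^r * g^(a-r) = g^a (rG + (a-r)G = aG).
assert (commit(share_bob) * commit(share_charley)) % p == A

# If Alice refuses to reveal, Bob and Charley reconstruct a = r + (a - r) mod q.
assert (share_bob + share_charley) % q == a
```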
These sorts of ideas can be combined.
But which combinations are interesting depends on the application. For example, one case I've thought a lot about is making random values that people in the future can be pretty confident weren't rigged.
Alice and Bob can agree on their terms, each generate a private key as a hash of the agreement terms and a random value, and each send a bitcoin to their own keys A and B in block 1000. Then after block 1000 they sweep their coins and reveal their keys, and your random number becomes H(blockhash || a_private || b_private). This would make it difficult for the miner to bias the result without knowing A's and B's keys, which would let them steal the coins. A and B couldn't easily restate their random values because they're committed in the chain, etc.
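A hedged sketch of how the final value could be computed in that scheme; the key-derivation helper, terms string, and placeholder blockhash are illustrative assumptions, not part of any existing protocol:

```python
import hashlib, os

def derive_private_key(agreement_terms: bytes, nonce: bytes) -> bytes:
    # Each party derives their key from the agreed terms plus their own randomness,
    # so the key itself doubles as their committed random contribution.
    return hashlib.sha256(agreement_terms + nonce).digest()

terms = b"Alice/Bob randomness agreement, coins swept after block 1000"
a_private = derive_private_key(terms, os.urandom(32))
b_private = derive_private_key(terms, os.urandom(32))

# After block 1000 both parties sweep their coins and reveal their keys; the
# shared random value is then H(blockhash || a_private || b_private).
blockhash = bytes.fromhex("00" * 32)  # placeholder for the real block 1000 hash
random_value = hashlib.sha256(blockhash + a_private + b_private).hexdigest()
print(random_value)
```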
Another idea, which I've not seen implemented, is to use a slow sequential function on the block hash. So e.g. your randomness involves H(blockhash), but you make H in that case some function that takes 20 minutes to compute. You can argue that a miner might throw out a block solution to bias a result -- but if he can't even tell what the result is until 20 minutes after finding the block, that is much harder. I understand that Ben Fisch has been working on finding functions for H() in this example which are cheap to verify, so others can check the result of that 20 minutes of computation without doing 20 minutes of computation themselves.
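A toy illustration of the slow-sequential-function idea, using iterated hashing (an assumption on my part, not the construction referred to above): each step depends on the previous one, so the delay can't be parallelized away, but plain iterated hashing lacks the cheap verification property described above, since checking the output means redoing all the work.

```python
import hashlib

def slow_hash(blockhash: bytes, iterations: int) -> bytes:
    # Iterated SHA-256: strictly sequential, so the delay can't be parallelized.
    # `iterations` would be tuned so this takes ~20 minutes on fast hardware.
    h = blockhash
    for _ in range(iterations):
        h = hashlib.sha256(h).digest()
    return h

# Toy run with a tiny iteration count; a real delay parameter would be vastly larger.
result = slow_hash(b"\x00" * 32, 10_000)
print(result.hex())
```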