Committing 10k rolls at a time may go against the atomic consistency of the DB, resulting in unexpected effects on the bankroll (positive or negative).
The bulk insert is applied atomically, but each inserted row still has to respect its constraints.
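For concreteness, here's a minimal sketch of what that looks like, assuming Postgres via node-postgres and a hypothetical `rolls` table (the table and columns are made up): the whole batch goes in one transaction, so either every row commits, each satisfying its constraints, or none of them do.

```ts
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the usual PG* env vars

// Write a whole batch of rolls in one transaction: either every row
// commits (and each row satisfies its constraints) or none of them do.
// The `rolls` table and its columns are hypothetical.
async function insertRolls(
  rolls: { playerId: number; wager: number; outcome: number }[]
): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    for (const r of rolls) {
      await client.query(
        "INSERT INTO rolls (player_id, wager, outcome) VALUES ($1, $2, $3)",
        [r.playerId, r.wager, r.outcome]
      );
    }
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```

If the DB supports it, a single multi-row INSERT inside the same transaction would do the same thing with fewer round trips.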
One big reason you'd want to accumulate rolls in memory and then commit them to the DB in bulk is when each roll has to be applied to the DB sequentially and you don't want sequential roll insertion to be the bottleneck.
For example, imagine that each roll needs the current total house bankroll to adjust that roll's edge, and the bankroll changes with every roll. If every roll has to hit the DB server, you're going to be dealing with a lot of insert contention.
Instead, let's say you only write to the DB once per second. As rolls come in, you buffer the results in memory (assured that the DB cannot change during this time) and then commit all the buffered rolls at once.
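A rough sketch of that buffering, assuming a Node-style environment; `insertRolls` here is just a stand-in for whatever bulk write you use (e.g. the transactional insert sketched above):

```ts
// Rolls are pushed into an in-memory array as they arrive; a 1-second
// timer drains the buffer and writes the whole batch in one go.
type Roll = { playerId: number; wager: number; outcome: number };

// Stand-in for whatever bulk write you use (e.g. the transactional
// insert sketched above).
declare function insertRolls(rolls: Roll[]): Promise<void>;

const buffer: Roll[] = [];

function onRoll(roll: Roll) {
  buffer.push(roll); // no DB round trip per roll
}

setInterval(async () => {
  if (buffer.length === 0) return;
  const batch = buffer.splice(0, buffer.length); // drain the buffer
  await insertRolls(batch); // one commit per second instead of one per roll
}, 1000);
```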
Here's an idea: imagine writing a function `roll(prevDB, params)` that returns `newDB`, where `prevDB` and `newDB` are just associative data structures that represent the state of the DB in memory.
That means you can `reduce(roll, prevDB, [params, ...])` where `[params, ...]` is a sequence of user roll parameters coming down, say, a websocket.
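A sketch of what `roll` and the reduction might look like in TypeScript; `DBState`, the edge formula, and the payout logic are all made up for illustration (and JS's built-in `reduce` takes its arguments in a slightly different order than the pseudocode above, but it's the same fold):

```ts
// `DBState` is a hypothetical in-memory picture of the tables the roll
// logic needs; `roll` never touches the real DB, it just returns the
// next state. The edge/payout math is made up for illustration.
type RollParams = { playerId: number; wager: number };

interface DBState {
  houseBankroll: number;
  rolls: { playerId: number; wager: number; payout: number }[];
}

function roll(prevDB: DBState, params: RollParams): DBState {
  // Hypothetical edge adjustment based on the current bankroll.
  const edge = prevDB.houseBankroll > 1_000_000 ? 0.01 : 0.02;
  const won = Math.random() < 0.5 - edge;
  const payout = won ? params.wager : -params.wager; // from the player's side
  return {
    houseBankroll: prevDB.houseBankroll - payout,
    rolls: [...prevDB.rolls, { ...params, payout }],
  };
}

// Folding a buffered batch of roll params over the last committed state:
function applyBatch(prevDB: DBState, batch: RollParams[]): DBState {
  return batch.reduce(roll, prevDB);
}
```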
At the turn of each second, you stop the reduction, take the latest `newDB` value (the result of all the buffered rolls), and commit it to the DB. The post-commit value of the DB becomes your `prevDB`, which you feed back into the reduction as you resume processing rolls.
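Tying it together, a sketch of that once-per-second checkpoint loop, reusing `roll`, `DBState`, and `RollParams` from the sketch above; `commitState` is a placeholder, not a real API, for whatever persists the new state and returns the post-commit value:

```ts
// Reuses `DBState`, `RollParams`, and `roll` from the sketch above.
// `commitState` is a placeholder for whatever persists the new state
// (e.g. a transactional bulk insert plus a bankroll update) and
// returns the post-commit state.
declare function commitState(db: DBState): Promise<DBState>;

let prevDB: DBState = { houseBankroll: 1_000_000, rolls: [] };
let pending: RollParams[] = [];
let committing = false;

// Roll params arriving over the websocket just get buffered.
function onRollParams(params: RollParams) {
  pending.push(params);
}

setInterval(async () => {
  if (committing || pending.length === 0) return;
  committing = true;
  const batch = pending;
  pending = []; // new rolls buffer against the next tick
  try {
    const newDB = batch.reduce(roll, prevDB); // apply the whole batch in memory
    prevDB = await commitState(newDB); // post-commit value seeds the next reduction
  } finally {
    committing = false; // (error handling / re-queueing of `batch` omitted)
  }
}, 1000);
```

The `committing` flag is only there so a slow commit can't overlap with a reduction against a stale `prevDB`; a real implementation would also have to decide what to do with a batch whose commit fails.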