Since the photo is of the dice and the dice are random, the values and arrangement of the dice in the photo are random as well. Therefore the entropy of the photo should be significantly higher than the entropy of the dice values alone, since the photo also captures the location and orientation of each die as well as the background.
Of course, that assumes the photo has enough detail and clarity to make out the values of all the dice.
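To make the comparison concrete, here is a minimal sketch (my own illustration, not from the discussion above) of the entropy carried by the dice values alone, assuming fair six-sided dice. Anything the photo adds beyond these values, such as position, orientation, and background, is extra information on top of this baseline:

```python
import math

def dice_entropy_bits(num_dice: int, faces: int = 6) -> float:
    """Shannon entropy (in bits) of the values of num_dice fair dice."""
    # Each fair die contributes log2(faces) bits; independent dice add up.
    return num_dice * math.log2(faces)

# Five fair d6: the values alone carry about 12.92 bits.
print(round(dice_entropy_bits(5), 2))
```

This is only the entropy of the outcomes themselves; a photo of the same roll encodes those outcomes plus everything else in the frame.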
I'm sorry, but that just makes no sense: the information entropy of a number is raw entropy, with a 0% noise-to-signal ratio.
A picture, on the other hand, has runs of repeating colors all over the place, so it is no longer raw entropy; the repeated bits are redundant. I highly doubt that is high-quality randomness.
https://s29.postimg.org/63h7ciytj/images_duckduckgo_com.gif
Maybe if it's just a black-and-white binary image of random 0s and 1s, but then that image is already a complete description of the information it contains.
But a photo of just the dice themselves, which is not even in the same "language," I doubt it. The bits are the language, and repeating bits are not a sign of quality.
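The point about repeating bits is really a point about entropy density. A hedged sketch (my own illustration, using empirical Shannon entropy over byte frequencies) shows how repetitive, image-like data carries far fewer bits per byte than uniform random data:

```python
import math
import os
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of a byte stream, in bits per byte (max 8.0)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Repetitive runs of two byte values, like flat color regions in an image.
flat = b"\xff\xff\x00\x00" * 4096
# Uniform random bytes from the OS for comparison.
rand = os.urandom(16384)

print(round(bits_per_byte(flat), 2))  # 1.0: only two equally likely symbols
print(round(bits_per_byte(rand), 2))  # close to the 8.0 maximum
```

The repetitive stream tops out at 1 bit per byte even though it is 16 KB long, which is the sense in which a picture full of repeating colors is not "raw" entropy.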