I understand that some game theory uses this definition of rationality, but I find it to be a misnomer. It's really just financial optimization. If a psychological need is valued more than a fixed monetary amount, it's not illogical to prefer the psychological satisfaction.
What if you consider that the utility measure already accounts for all these psychological factors? So the numbers in the matrix don't represent raw dollars, but an accurate estimation[1] of how happy it would make you, in relative terms, to pick that option, all things considered. Then rationality is an accurate enough name for the optimization of that, isn't it?
That is formally equivalent to calling the utility money and assuming (again, for the sake of mathematical simplification) that that is all you care about.
[1]: Don't ask me how you would go about attributing a scalar value to the hairy mess of actual human preferences and biases, though. Game theory doesn't even try to do this.
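To make the "formally equivalent" point concrete, here's a minimal sketch (names and numbers entirely made up for illustration): the maximization rule is identical whether you read the matrix entries as dollars or as utility that already folds in the psychological factors; only the numbers, and therefore the chosen option, change.

```python
# Minimal sketch (invented names and payoffs): the decision rule is the same
# whether the entries are raw money or all-things-considered utility.

options = ["take the cash", "keep your principles"]

dollar_payoffs  = [100.0, 0.0]  # reading the entries as raw dollars
utility_payoffs = [0.4, 0.9]    # pretending we could estimate "happiness" directly

def best_option(payoffs):
    """Pick the option with the highest payoff, whatever the units are."""
    i = max(range(len(payoffs)), key=lambda k: payoffs[k])
    return options[i]

print(best_option(dollar_payoffs))   # take the cash
print(best_option(utility_payoffs))  # keep your principles
```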
Most economists retreat from basing measurements on cardinal utility, opting instead for ordinal utility. That said, suppose (a very big assumption) that cardinal utility exists and that you could accurately measure it for all potential choices, and that you could prove an agent (a person) chose an outcome that was suboptimal in utility. Then you would essentially have to explore the possibility that human action lacks an element of free will, or else redefine the idea of preferences.
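For what it's worth, the ordinal/cardinal distinction can be stated in one line: any strictly increasing transformation of a utility function represents the same ordinal preferences, so the magnitude information that cardinal utility claims to carry is exactly the part that isn't pinned down. A toy sketch (my own illustration, invented values):

```python
import math

# Toy illustration: a monotone rescaling preserves the ranking of options
# (ordinal preferences) but not the "gaps" a cardinal scale claims to measure.
options = ["A", "B", "C"]
cardinal_u = {"A": 1.0, "B": 2.0, "C": 5.0}                   # a pretend cardinal scale
rescaled   = {o: math.exp(u) for o, u in cardinal_u.items()}  # strictly increasing transform

def ranking(u):
    """Order options from most to least preferred under utility function u."""
    return sorted(options, key=u.get, reverse=True)

assert ranking(cardinal_u) == ranking(rescaled)  # same ordinal preferences
# ...but the differences between options are not preserved:
print(cardinal_u["C"] - cardinal_u["B"], rescaled["C"] - rescaled["B"])
```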