In the traditional formulation of utility and social choice, a rational agent is imagined as having a preference ordering over all possible states, and the "social choice" is the option that best aggregates those orderings across the whole group of agents into some collectively optimal decision.
But one very important and common way that people are "irrational" is altruism and spite. That is, my utility function is a function of someone else's utility function, and so on. What matters for modeling is that in a group of people, everyone's utility functions/preference orderings become systematically coupled: a system of interdependent equations rather than a set of independent preferences. Furthermore, altruistic and spiteful behavior depends on previous relationships over time, expectations for the future, and system design.
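To make that concrete, here is a minimal sketch assuming a linear other-regarding model: each agent's utility is their own payoff plus a weighted sum of the *other agents' utilities*. The weight matrix `W` and the specific numbers are made up for illustration; the point is just that the group's utilities become one coupled system that has to be solved jointly, not independent quantities.

```python
import numpy as np

# Hypothetical linear other-regarding model:
#   u_i = x_i + sum_{j != i} W[i][j] * u_j
# Stacking all agents gives u = x + W u, i.e. (I - W) u = x.

def coupled_utilities(x, W):
    """Solve the coupled system u = x + W u, assuming I - W is invertible."""
    n = len(x)
    return np.linalg.solve(np.eye(n) - W, np.asarray(x, dtype=float))

x = [10.0, 10.0, 0.0]                              # material payoffs for <A, B, C>
W_altruism = 0.2 * (np.ones((3, 3)) - np.eye(3))   # everyone mildly values the others
W_spite = -0.3 * (np.ones((3, 3)) - np.eye(3))     # everyone mildly resents the others

print(coupled_utilities(x, W_altruism))  # C's utility rises because A and B are happy
print(coupled_utilities(x, W_spite))     # C's utility drops below zero out of resentment
```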
For a one-shot question this doesn't really change anything, and it is already priced into many models.
But if you are trying to design a system to fairly distribute a good, say a government distributing public goods, there are some huge modeling effects to consider. If a government gives out one widget, there are three kinds of updates to account for (a code sketch follows the list):
- the intrinsic utility to the recipient of getting that widget;
- an update to everyone else's utility function/preference ordering via altruism or spite;
- a change in everyone's strategic thinking, which changes observed behavior.
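Continuing the linear-coupling sketch above (same hypothetical weight matrix `W`), handing a widget to one agent shifts every agent's utility, not just the recipient's; the third channel, changed strategic behavior, is dynamic and falls outside this static calculation.

```python
import numpy as np

def utility_change_from_widget(x, W, k, widget_value=1.0):
    """Change in everyone's coupled utility when agent k receives one widget."""
    n = len(x)
    before = np.linalg.solve(np.eye(n) - W, np.asarray(x, dtype=float))
    x_after = np.asarray(x, dtype=float).copy()
    x_after[k] += widget_value                     # channel 1: intrinsic payoff bump
    after = np.linalg.solve(np.eye(n) - W, x_after)
    return after - before                          # channel 2: everyone's utility moves

W = 0.2 * (np.ones((3, 3)) - np.eye(3))
print(utility_change_from_widget([10.0, 10.0, 0.0], W, k=2))
```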
Let's consider this example of payouts to agents <A, B, C>:
<10, 10, 0>
<10, 0, 10>
<0, 10, 10>
<5, 5, 5>
<0, 0, 0>
If the agents are purely selfish, the three 10/10/0 splits tie or cycle against one another, so there is no Condorcet winner.
With altruistic agents the result depends on the parameters: either a single Condorcet winner emerges (e.g., <5, 5, 5> when altruism values others' gains with diminishing returns), or the splits keep tying/cycling because every agent wants everyone else to have more.
With agents who are strongly mutually spiteful, <0, 0, 0> is the Condorcet winner.
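Here is a rough sketch that checks those claims on the five payout profiles. The utility forms and the specific alpha values are illustrative assumptions, not any standard model: alpha = 0 is selfish, positive alpha is altruism (either linear or over square-root "diminishing returns" value), negative alpha is spite.

```python
import math

PROFILES = {
    "<10,10,0>": (10, 10, 0),
    "<10,0,10>": (10, 0, 10),
    "<0,10,10>": (0, 10, 10),
    "<5,5,5>":   (5, 5, 5),
    "<0,0,0>":   (0, 0, 0),
}

def linear(x, i, alpha):
    """u_i = own payoff + alpha * (sum of others' payoffs); alpha < 0 is spite."""
    return x[i] + alpha * (sum(x) - x[i])

def concave(x, i, alpha):
    """Altruism over the diminishing-returns (sqrt) value of everyone's payoff."""
    return math.sqrt(x[i]) + alpha * sum(math.sqrt(v) for j, v in enumerate(x) if j != i)

def condorcet_winner(utility, alpha, n=3):
    """The profile a strict majority prefers to every other profile, if one exists."""
    def beats(p, q):
        wins = sum(utility(PROFILES[p], i, alpha) > utility(PROFILES[q], i, alpha)
                   for i in range(n))
        return wins > n / 2
    for p in PROFILES:
        if all(beats(p, q) for q in PROFILES if q != p):
            return p
    return None  # ties and/or a cycle: nothing beats everything else

print(condorcet_winner(linear, 0.0))    # selfish: None (the splits tie/cycle)
print(condorcet_winner(linear, 0.75))   # linear altruism: still None
print(condorcet_winner(concave, 1.0))   # diminishing-returns altruism: <5,5,5>
print(condorcet_winner(linear, -1.5))   # strong mutual spite: <0,0,0>
```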
So if you internalize spite/altruism into your social choice model, irrationally (or rationally?) spiteful agents might willingly choose nuclear war as the efficient social choice.
I think from a system design perspective it is important to treat the utility/preference orderings over the thing being distributed differently from the second-order effects of how people feel about each other's feelings. One important finding here is that altruistic/spiteful behavior is itself a strategy for selfishly optimizing a reward. Altruism and spite are learned and evolved behaviors that can be evolutionarily stable strategies, and they are even learnable by perfectly rational agents. The key is that they only make sense in repeated games, as part of a meta-strategy against other agents.
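A standard way to see this (not specific to the distribution example) is an Axelrod-style iterated prisoner's dilemma: tit-for-tat looks "spiteful" move to move because it punishes defection, but it is simply a payoff-maximizing strategy for the repeated game, and a population playing it can't be invaded by unconditional defectors the way unconditional cooperators can. The payoffs and round count below are the usual textbook choices, nothing more.

```python
# Iterated prisoner's dilemma with the standard payoffs T=5, R=3, P=1, S=0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_cooperate(opponent_history):
    return "C"

def always_defect(opponent_history):
    return "D"

def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move (costly retaliation).
    return opponent_history[-1] if opponent_history else "C"

def total_payoffs(strat1, strat2, rounds=200):
    """Total scores for both strategies over a fixed-length repeated game."""
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h2), strat2(h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        s1 += p1; s2 += p2
    return s1, s2

# A defector invading a tit-for-tat population earns less than the residents...
print(total_payoffs(always_defect, tit_for_tat)[0],          # invader: 204
      total_payoffs(tit_for_tat, tit_for_tat)[0])            # resident: 600

# ...but the same defector thrives among unconditional cooperators.
print(total_payoffs(always_defect, always_cooperate)[0],     # invader: 1000
      total_payoffs(always_cooperate, always_cooperate)[0])  # resident: 600
```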
Spite and altruism can also be artifacts of conscious strategic play. Disproportionate retaliation and third-party punishment of non-compliance are bargaining strategies that exist even without voting; sending a message can be a utility-maximizing strategy in some contexts.
I guess there are some complex second- and third-order effects of switching from rational agents to real people.