Recently I have been interested in the method "Rule X," introduced in this paper as well as section 3.5 of this paper. There are rigorous definitions there, so in this post I'll try just to give intuition and informal descriptions.

I will describe how it works on approval ballots, then how I have attempted to extend it to score.

The basic idea is: each voter starts with $1 (or just voting power 1, but money is a more fun metaphor). A candidate costs a Hare quota $*q* to elect. We can interpret an approval from a voter as willingness to spend their remaining budget to elect that candidate. Rule X sequentially chooses the candidate who can be purchased for $*q* at the **lowest** uniform price (not exactly uniform since some voters may exhaust their entire budget).

It's quite similar to, for example, Sequentially Spent Score (SSS), so let's compare the differences (using approval ballots!). Say the quota is $20. There are 35 supporters of candidate A who indicated a willingness and ability to pay a combined $28 ($0.8 each). There are 30 supporters of candidate B who indicated a willingness and ability to pay a combined $30 ($1 each).

SSS would choose B on this round, since the indicated demand is higher. Rule X would choose A on this round, since the price per supporter is lower. Each method then removes $*q* from the budgets of the winner's supporters, but they do this in different ways.

SSS treats a voter's remaining budget more like a *multiplicative* deweighting (yes I know this is handwavey), and subtracts a fraction of the $*q* in proportion to that voter's indicated willingness and ability to pay. On the other hand, Rule X treats a voter's remaining budget more like a *subtractive* deweighting, and chooses to subtract uniformly the smallest price possible such that if all the supporters pay that (up to their remaining budget) it will equal $*q*.
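To make the price comparison concrete, here is a minimal sketch (my own helper, not the authors' code) of the Rule X price computation on approval ballots, using the numbers from the example above:

```python
def rulex_price(budgets, quota):
    """Smallest uniform price p such that the supporters' payments
    sum(min(b, p) for each supporter budget b) reach the quota.
    Returns None if the supporters cannot afford the candidate at all."""
    if sum(budgets) < quota:
        return None
    lo, hi = 0.0, max(budgets)
    for _ in range(100):  # bisection; 100 halvings is plenty for float precision
        mid = (lo + hi) / 2
        if sum(min(b, mid) for b in budgets) < quota:
            lo = mid
        else:
            hi = mid
    return hi

# Quota $20; 35 A-supporters with $0.8 left; 30 B-supporters with $1 left.
price_A = rulex_price([0.8] * 35, 20)  # 20/35, about $0.571 each
price_B = rulex_price([1.0] * 30, 20)  # 20/30, about $0.667 each
# Rule X elects A this round: lower uniform price.
```

Nobody hits their budget cap here, so the price is just quota divided by the number of supporters; the `min(b, p)` cap only matters once some budgets run low.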

The reason I am interested in this rule is its excellent axiomatic characteristics. It satisfies Extended Justified Representation (EJR) and is a logarithmic approximation to the core.

I have extended Rule X to scored ballots with the following principle: when a voter gives a candidate a score of *s < 1*, then all payments made by that voter to elect that candidate should be made with efficiency *s*, so if a voter has scored a candidate *0.3* then for price set at *0 < p <= 1* they will spend *0.3p*. I'll illustrate this by way of example, but I can draw up a formal definition if the example does not make it clear enough what I'm doing.

- There is a coalition of 22 voters for candidate A with $0.6 remaining who scored A as 0.7
- There is a coalition of 31 voters for candidate A with $0.35 remaining who scored A as 1.

Say the quota is again $20. Then the price *p* is the solution to *22·0.7·min(p, 0.6) + 31·1·min(p, 0.35) = 20*, so the price will be set at *p ≈ $0.594*. The coalition of *31* voters will be exhausted (paying $0.35 each) and the coalition of *22* voters will spend *0.7p ≈ 0.416* each, to be left with about $0.184 remaining.

If there is no other candidate that can be purchased for a price lower than *$0.594* then A will be selected and the ballots will be spent as above.
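A small sketch (my own notation) that solves this score-weighted price equation numerically, reproducing the example above:

```python
def score_rulex_price(groups, quota):
    """groups: list of (count, score, remaining_budget) tuples.
    Finds the smallest price p satisfying
        sum(count * score * min(p, budget)) == quota,
    the equation from the example; returns None if unaffordable."""
    if sum(n * s * b for n, s, b in groups) < quota:
        return None
    lo, hi = 0.0, max(b for _, _, b in groups)
    for _ in range(100):  # bisection on the monotone payment function
        mid = (lo + hi) / 2
        if sum(n * s * min(mid, b) for n, s, b in groups) < quota:
            lo = mid
        else:
            hi = mid
    return hi

# 22 voters ($0.6 left, score 0.7) and 31 voters ($0.35 left, score 1), quota $20:
p = score_rulex_price([(22, 0.7, 0.6), (31, 1.0, 0.35)], 20)
```

Solving by hand: the 31-voter coalition exhausts at $0.35 each, leaving 22·0.7·p = 20 − 10.85, so p = 9.15/15.4 ≈ 0.594.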

You may ask "what if no candidate can get a full quota of demand?"

This is a good question. What the authors of the method suggest is to give every voter a little money, uniformly, until some candidate can be purchased (this is seq-Phragmen). Alternatively, one could just iteratively select the highest-demand candidate and completely exhaust the budgets of its supporters. Note that this is exactly what SSS and Allocated Score do, so they run into the same problem of 'unaffordable' candidates; it is just a little harder to see.
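Here is a sketch of the first completion idea for approval ballots (hypothetical helper names): find the smallest uniform amount to add to every voter's budget so that some candidate's supporters can jointly afford the quota.

```python
def uniform_topup(supporters, budgets, quota):
    """supporters: dict mapping candidate -> list of voter indices.
    Every voter's budget grows by the same amount t; a candidate becomes
    affordable once its supporters' combined budgets reach the quota.
    Returns the (candidate, t) pair minimizing t."""
    best = None
    for cand, voters in supporters.items():
        if not voters:
            continue  # no supporters: top-ups can never make this affordable
        shortfall = quota - sum(budgets[i] for i in voters)
        t = max(0.0, shortfall / len(voters))
        if best is None or t < best[1]:
            best = (cand, t)
    return best

# Four voters with $0.5 each; X approved by all four, Y by two; quota $3:
cand, t = uniform_topup({'X': [0, 1, 2, 3], 'Y': [0, 1]}, [0.5] * 4, 3)
```

X's four supporters are $1 short in total, so a top-up of $0.25 each suffices; Y would need $1 per supporter, so X is chosen.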

There is already significant discussion in those papers about Rule X in general. Of course, if there are glaring issues with the approval variant that would be important to hear, but what I am more interested in is an evaluation of my attempt to extend it to score ballots.

- Does the scored variant have any holes in terms of quality or strategic behavior?
- Is this the most natural way to extend Rule X to scored ballots or is there another that makes more sense?

Looking forward to hearing thoughts, and again if my informal definition-by-example is not sufficient I am happy to provide pseudocode or a real formula.

---

Consider this 5 winner example with clones for each candidate

Red: 61% vote A:5, B:3, C:0

Blue: 39% vote A:0, B:3, C:5

RRV Gives ['A1', 'C1', 'A2', 'B1', 'B2']

MES Gives ['A1', 'A2', 'A3', 'C1', 'B1']

SSS Gives ['A1', 'B1', 'B2', 'B3', 'B4']

Allocated score Gives ['A1', 'B1', 'A2', 'B2', 'A3']

STV Gives ['A1', 'A2', 'A3', 'C1', 'C2']

The code I ran (which I can share) was taken from the electowiki page written by the inventor Piotr Skowron and I confirmed with him personally that the result was correct. I suspect you have a bug.

---

Here is a proposed proportionality criterion possibly satisfied by SSS (and if so, probably MES as well). It intends to look at proportionality of utility rather than proportionality of ballot weight. I have adapted slightly the definition of FJR to make it more PJR-like, as I explained in my prior post. I think the partial-cohesion framework makes a lot of sense to use when talking about proportionality of utility.

Say there is a coalition of voters S such that |S| is at least M quotas. Say for (integer) beta <= M that there is a set T of M candidates such that u_i(T) >= beta for all i in S. Then for any winning committee W, it must be satisfied that sum_{c in W} max_{i in S} u_i(c) >= beta.

Note that this is still a rather weak condition and does not guarantee utility-efficient committees. One way to strengthen it might be to use FJR and instead demand that max_{i in S} u_i(W) >= beta.
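Both variants are mechanical to check for a given coalition; a sketch with my own helper names (utilities as dicts candidate -> utility, restricted to the coalition S):

```python
def weak_ok(S_utilities, W, beta):
    """PJR-like variant: sum over winners of the coalition's best utility."""
    return sum(max(u.get(c, 0) for u in S_utilities) for c in W) >= beta

def strong_ok(S_utilities, W, beta):
    """FJR-like variant: some single voter in S gets total utility >= beta from W."""
    return max(sum(u.get(c, 0) for c in W) for u in S_utilities) >= beta

# Toy coalition: one voter loves A, one loves B, both nearly indifferent to C.
S = [{'A': 1, 'C': 0.01}, {'B': 1, 'C': 0.01}]
```

With beta = 1, the committee ['A', 'B', 'C'] passes both variants, while a committee contributing only the 0.01 utilities fails both.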

Is either criterion satisfied by SSS or MES? Any quick counter-examples?

**Edit**:

Please feel free to check my arithmetic, but I believe this example shows that MES satisfies *neither* criterion above, even when it is exhaustive and does not return early. This is obviously a pathological scenario, but nonetheless it is one where MES returns clearly the 'wrong' result.

5 candidates (A, B, C, D, E), 3 to elect. The quotas give scores:

q_1: (1, 0, 0.01, 0.01, 0.01)

q_2: (0, 1, 0.01, 0.01, 0.01)

q_3: (0, 0, 1, 1, 1)

Since every voter in the coalition of quotas q_1 and q_2 gets utility 1 from {A, B}, the criterion would demand that at least one voter gets utility 1 from the winning committee (alternatively, the weaker condition demands that the sum over winners of the coalition's max utilities be at least 1).

**Round 1**

Ballot weights: q_1 = 1, q_2 = 1, q_3 = 1

elect C for a price of 50/51

**Round 2**

Ballot weights: q_1 = 101/102, q_2 = 101/102, q_3 = 1/51

elect D for a price of 2500/51 (around 49.02)

**Round 3**

Ballot weights: q_1 = 1/2, q_2 = 1/2, q_3 = 0

elect E for a price of 50

So the winning set is {C, D, E}. The utility of any voter in q_1 or q_2 is 0.03, far short of 1, so the criterion is violated. On the other hand, SSS, AS, and STV all return {A, B, C}, the 'correct' winner set.
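To double-check this arithmetic, the rounds can be replayed with exact fractions, assuming 50 voters per quota (so 150 voters, seat cost n/k = 50, and a starting budget of $1 each):

```python
from fractions import Fraction as F

def total_paid(blocs, rho):
    """blocs: list of (count, utility, remaining_budget).
    MES payment per voter at price rho is min(budget, utility * rho)."""
    return sum(n * min(b, u * rho) for n, u, b in blocs)

# Round 1: elect C at rho = 50/51
round1 = [(50, F(1, 100), F(1)), (50, F(1, 100), F(1)), (50, F(1), F(1))]
assert total_paid(round1, F(50, 51)) == 50

# Round 2: elect D at rho = 2500/51
round2 = [(50, F(1, 100), F(101, 102)), (50, F(1, 100), F(101, 102)),
          (50, F(1), F(1, 51))]
assert total_paid(round2, F(2500, 51)) == 50

# Round 3: elect E at rho = 50
round3 = [(50, F(1, 100), F(1, 2)), (50, F(1, 100), F(1, 2)), (50, F(1), F(0))]
assert total_paid(round3, F(50)) == 50
```

Each stated price makes the payments sum exactly to the seat cost, so the budgets and prices in the rounds above are internally consistent.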

Unfortunately, it's not true for SSS either. Consider an example with three candidates, 2 voters, and two candidates to elect. Voter 1 submits (1, 0.99, 0) and voter 2 submits (0, 0.011, 1).

---

It's a bit like perfect representation, except that perfect representation makes the demand for fractions of candidates. (E.g. if it's possible for each voter to be assigned their own unique 0.05 of a candidate - allowing e.g. for 0.025 each for two candidates - then this must happen.)

A similar criterion would be that for each new candidate you add to the committee from 1 upwards, they must be assignable to a different voter, until every voter has their own candidate (assuming the ballots make it possible). It would then continue with each voter getting a second candidate in turn if even more candidates were added.

Thiele-based methods fail this.

---

For example, SSS can fail this version of JR because a group does not agree on full support for any candidate. It comes from the extension of JR to score. I think of PR in the score case more like how approval JR works after applying the Kotze-Pereira transformation: you need a quota of voters giving full support, or 2 quotas giving half support. I think in terms of quotas of score, not quotas of voters.
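For reference, the Kotze-Pereira transformation mentioned here slices each score ballot into weighted approval ballots, one slice per distinct positive score level; a minimal sketch:

```python
def kp_transform(ballot):
    """ballot: dict candidate -> score in [0, 1].
    Returns a list of (weight, approval_set) pairs; the weights sum to the
    ballot's maximum score, and a half-score counts as half an approval."""
    levels = sorted({s for s in ballot.values() if s > 0})
    slices, prev = [], 0.0
    for t in levels:
        # every candidate scored at least t is approved on this slice
        slices.append((t - prev, {c for c, s in ballot.items() if s >= t}))
        prev = t
    return slices

# Full support for A plus half support for B splits into two half-weight slices:
slices = kp_transform({'A': 1.0, 'B': 0.5})
# [(0.5, {'A', 'B'}), (0.5, {'A'})]
```

This is exactly the "quotas of score" view: two quotas of voters at score 0.5 carry the same approval weight as one quota at full support.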

MES allows a voter to buy a candidate for more than their score/utility given on the ballot if rho > 1. This violates what I call vote unitarity. What would MES do if you limited rho to be less than 1? That would be much more similar to SSS, but might be broken. Consider this pathological scenario for MES.

Candidates = ['A','B1','B2','B3','B4','B5','B6']

1 * [[1,5,0,0,0,0,0]]

5 * [[1,0,5,0,0,0,0]]

5 * [[1,0,0,5,0,0,0]]

5 * [[0,0,0,0,5,0,0]]

5 * [[0,0,0,0,0,5,4]]

5 * [[0,0,0,0,0,3,5]]

Winners = 5.0

I submit that A should not be elected, but MES selects it. This is not an unrealistic example either. Allocated Score can do similar things. It comes from the sort, and I have long complained about whole ballots being allocated for a score of 1. For the MES case it is related to candidates being just short of a full quota.

A related issue with MES is that it needs a completion method, and they chose bloc score. I think SSS would be a better "completion method" for MES instead. It would avoid situations like

['A','B1','B2','B3','B4','B5','C0','C1','C2']

39 * [[0,5,5,5,5,5,0,0,0]]

18 * [[0,0,0,0,0,0,5,0,0]]

17 * [[0,0,0,0,0,0,0,5,0]]

16 * [[0,0,0,0,0,0,0,0,5]]

7 * [[5,0,0,0,0,0,0,0,0]]

2 * [[5,0,0,0,0,0,0,0,0]]

Winners = 5.0

MES returns ['B1','B2','B3','B4','B5'] but with the SSS completion you get ['B1', 'B2', 'C0', 'C1', 'C2'].

Anyway, single examples are not really great since all systems have problems. We need to simulate the three with strategy.

---

https://arxiv.org/abs/2112.05193

It is basically the same kind of idea of Justified Representation, but made about as strong as it gets.

It requires that *every* voter who is part of an f-cohesive group must approve at least f winners in the committee. I think this is a cool thing to consider because

- On the surface, it sounds very desirable (although it is not always possible).
- Like the notion of "balanced stable priceability" (BSP, a tweak on core stability), it is in some sense the strongest possible demand for a certain philosophy on proportionality.
- Despite the above two, it is *strongly incompatible* with core stability, in the sense that there is an election where the set of core-stable committees (and thus also any BSP committee) is *disjoint* from the set of committees providing IR.

edit: I know this is a lot of acronyms so I'll provide a synopsis for those who don't have as much time to waste reading papers as I do:

Say 5 quotas of voters (say, S) all agree on 5 candidates (say, T). Then

- Justified Representation (JR) says at least one voter in S should approve at least one winner
- semi-strong JR says every voter within S should approve at least one winner
- strong JR says one of the candidates in T needs to win
- Proportional JR (PJR) says the voters in S need to approve at least 5 winners total
- Extended JR (EJR) says at least one voter in S needs to approve at least 5 winners total
- Fully JR (FJR) is the same as EJR with a fractional relaxation of the definition of 'cohesive,' so if instead of unanimity we have 5 quotas of voters S all approve 3 out of some set of 5 candidates T, then at least one voter in S approves at least 3 winners.
- Individual Representation (IR) says every voter in S needs to approve at least 5 winners total

In this light, you might think of IR as semi-strong EJR. I'm sure you could also define strong EJR (i.e., all candidates in T need to win) but this would be an incredibly restrictive criterion. Interestingly, you could also define FJR with a PJR flavor (voters from S approve at least 3 winners total), or semi-strong (voters from S approve at least 3 candidates each) or strong (3 candidates from T are elected). I haven't seen these notions in the literature yet and I'm not really sure how they would compare to the other ideas, but it's definitely on theme.
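For completeness, the base JR condition is easy to check directly on approval ballots; a sketch (my own helper name), with the quota taken as n/k:

```python
def satisfies_jr(ballots, winners, k):
    """ballots: list of approval sets; winners: the elected committee.
    JR fails iff some Hare quota of voters share an approved candidate
    while none of them approves any winner."""
    quota = len(ballots) / k
    winner_set = set(winners)
    for c in set().union(*ballots):
        deprived = sum(1 for b in ballots if c in b and not (b & winner_set))
        if deprived >= quota:
            return False
    return True

# Two Hare quotas, one per candidate; {A, B} represents everyone:
assert satisfies_jr([{'A'}, {'A'}, {'B'}, {'B'}], ['A', 'B'], 2)
assert not satisfies_jr([{'A'}, {'A'}, {'B'}, {'B'}], ['A', 'C'], 2)
```

The stronger variants (PJR, EJR, FJR) need to search over cohesive groups and are more expensive to verify; this one only needs a pass over candidates.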

Core, BSP, and PR are defined in other ways that are not as easy to summarize, but here's a shot at it:

- a committee W is in the core if there is no coalition of voters S comprising at least f quotas that can propose f candidates T such that every voter in S prefers T to W
- BSP means voters can be fractionally assigned to winners they support in W such that every winner gets the same total ballot weight assigned, each voter assigned to that winner spends the same fraction of ballot weight, no coalition has enough remaining support to elect a non-winner, and it is stable in the core sense.
- Perfect Representation (PR) means that voters can be partitioned into quotas such that each quota is unanimous for a winner.

Now, the incompatibilities above mean that it will be impossible to satisfy all of these criteria in every circumstance... but there may very well be an algorithm out there that either simultaneously approximates BSP & IR, or one that is guaranteed to provide either an IR or a BSP committee when at least one exists. Of course, if neither exists then who knows what the right committee is, but I think here there be dragons.

---

Consider this 5 winner example with clones for each candidate

Red: 61% vote A:5, B:3, C:0

Blue: 39% vote A:0, B:3, C:5

RRV Gives ['A1', 'C1', 'A2', 'B1', 'B2']

MES Gives ['A1', 'A2', 'A3', 'C1', 'B1']

SSS Gives ['A1', 'B1', 'B2', 'B3', 'B4']

Allocated score Gives ['A1', 'B1', 'A2', 'B2', 'A3']

STV Gives ['A1', 'A2', 'A3', 'C1', 'C2']

I could have made a calculation error, but I did it with code which I can post if people want to look for bugs. If correct, this is super interesting: they all give different results.

Which sets are in the core? If any?

Just out of interest, I worked this out with COWPEA + KP and got the following percentages (assuming I calculated correctly):

A: 43.1%

B: 34.3%

C: 22.6%

This would probably mean 2 As, 2 Bs, and a C in a five-seat constituency (the RRV result), which I think you consider to be not a great result.

---

but the results are:

AAABC: 0.00018

ABBBB: 0.247

AAABB: 0.186

AAACC: 0.00042

AABBC: 0.00011

For context, the var-Phragmen objective tries to minimize the variance of the 'voter load.' It is equivalent to Sainte-Laguë when voters vote along party lists. There is a stripped-down version of this metric (much easier to compute, but less useful information) sometimes referred to as 'Ebert cost.'
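A sketch of that 'Ebert cost' for approval ballots (each elected candidate spreads one unit of load equally over its approvers; candidates approved by nobody are skipped in this sketch):

```python
def ebert_cost(ballots, winners):
    """Sum over voters of (load)^2, where each winner distributes one unit
    of load equally among the voters approving them."""
    loads = [0.0] * len(ballots)
    for c in winners:
        approvers = [i for i, b in enumerate(ballots) if c in b]
        if not approvers:
            continue  # an unapproved winner assigns no load in this sketch
        for i in approvers:
            loads[i] += 1.0 / len(approvers)
    return sum(load ** 2 for load in loads)

# Two identical voters sharing one winner: each carries load 1/2.
cost = ebert_cost([{'A'}, {'A'}], ['A'])  # 2 * (1/2)^2 = 0.5
```

Minimizing the variance of the loads (var-Phragmen) and minimizing this sum of squares differ only by terms that are constant for a fixed committee size, which is why the squared-load version is the cheap proxy.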

---

> through scorevoting.net

That URL makes me shudder...

---

If we didn't include STV, we'd have the same problem, just with different entry points.

---

When you gave the results for your example in the post a few above you said RRV, but do you mean that you actually used SDV? Edit - In any case, I don't think RRV is a method worth calculating results for.

It was RRV. The only good thing about RRV is that it is simple to calculate; it was included to be illustrative. I think STV is garbage, but I included it too as a reference point. I think DSV is the best Thiele system, but I do not like Thiele systems, so I never bothered to code it.

@toby-pereira said in Rule X extended to score ballots:

> Yeah, that's the one that's SPAV + KP, right? I think that's still my preferred Thiele-based option.

Yes, that is what it is called on this page: https://electowiki.org/wiki/Kotze-Pereira_transformation

There is no proper page for it. If you want to advocate for it, you should make one.

---

I would not say that. Systems like SSS and MES are designed to be sequential. That may be the issue with RRV, but SSS and MES do not have the same excuse. DSV is the sequential implementation of Thiele for score. Or at least I designed it to be as close to SPAV as I could.

> When you gave the results for your example in the post a few above you said RRV, but do you mean that you actually used SDV? Edit - In any case, I don't think RRV is a method worth calculating results for.

@toby-pereira said in Rule X extended to score ballots:

> As an aside, regardless of what one thinks of Thiele methods in general, I do not consider RRV to be a good implementation of it.

You prefer Sequential Proportional Score Voting, correct?

Yeah, that's the one that's SPAV + KP, right? I think that's still my preferred Thiele-based option. But either that or SDV are likely superior to RRV.

---

> As far as I can tell, there is no particularly obvious way to extend the definitions to score while maintaining poly-time computability

Perhaps the Kotze-Pereira transformation

---

As far as I can tell, there is no particularly obvious way to extend the definitions to score while maintaining poly-time computability (at least, not that I can theoretically motivate). That is why I chose to transform to approval with random thresholds.

I am computing maximin support as described in Theorem 2 of this paper https://arxiv.org/pdf/1609.05370.pdf

And stable priceability as defined in section 3 of this paper https://www.cs.toronto.edu/~nisarg/papers/priceability.pdf

also

> Perhaps an optimal (non-sequential) variant of MES could be made where the rho is the same for all winners. This should minimize free riding, like with SSQ, but do it more cleanly.

I would be surprised if this is always possible, but it's an interesting idea. I am assuming you mean to still choose the winners sequentially, given some uniform rho?

edit: at the very least, it will be possible when every candidate has at least one bullet voter: you can set rho very high, at 1/(minimum over all scores awarded). Then this is the greedy Chamberlin-Courant rule where a score > 0 is interpreted as an approval. It might still have the issue where not every winner gets a full quota.
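That greedy Chamberlin-Courant reading (any score > 0 treated as an approval) looks roughly like this sketch:

```python
def greedy_cc(ballots, k):
    """ballots: list of dicts candidate -> score; positive score = approval.
    Each round, add the candidate that newly covers the most voters who do
    not yet approve any winner."""
    candidates = {c for b in ballots for c in b}
    winners, covered = [], set()
    for _ in range(min(k, len(candidates))):
        best = max(
            candidates - set(winners),
            key=lambda c: sum(1 for i, b in enumerate(ballots)
                              if i not in covered and b.get(c, 0) > 0),
        )
        winners.append(best)
        covered |= {i for i, b in enumerate(ballots) if b.get(best, 0) > 0}
    return winners

# Two A-supporters and one B-supporter, two seats:
result = greedy_cc([{'A': 5}, {'A': 1}, {'B': 2}], 2)  # ['A', 'B']
```

This maximizes coverage, not quota-sized support, which is exactly why some winners may end up backed by far less than a full quota.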

---

> I wanted to see how these committees fared in terms of
>
> a) the Maximin Support objective, equivalent to the max-Phragmen objective
>
> b) stable priceability, which implies core

Do these need to be defined in terms of Approval? Can you give the formula you used for clarity?

@andy-dienes said in Rule X extended to score ballots:

> The example as given is really a very edge case with tight numbers

As intended. It is only going to be such cases where they differ in results. @BTernaryTau Can you make a ternary plot for MES? It would be interesting to see the differences between SSS, MES, Allocated Score, RRV, and STV.

@andy-dienes said in Rule X extended to score ballots:

> I expect the committees these rules return would change a lot with a little noise added.

This was how I started the simulations from last time. I simulated the supporters as Gaussian blobs that I put in the 2D plane. The default example in vote_sim.py is somewhat similar to this example.

@toby-pereira said in Rule X extended to score ballots:

> It's weird that RRV has done that since its mechanism is just to maximise the "satisfaction" score for each voter.

Simulations have shown that RRV gets higher total utility more often. I suspect this is just a weird example for RRV.

@toby-pereira said in Rule X extended to score ballots:

> I presume then that this is to do with electing sequentially rather than something fundamental to RRV itself. And I would also presume that electing sequentially can throw out weird anomalies for any voting method, and I don't see any particular reason why any method should be more susceptible than any other method to this.

I would not say that. Systems like SSS and MES are designed to be sequential. That may be the issue with RRV, but SSS and MES do not have the same excuse. DSV is the sequential implementation of Thiele for score. Or at least I designed it to be as close to SPAV as I could.

Perhaps an optimal (non-sequential) variant of MES could be made where the rho is the same for all winners. This should minimize free riding, like with SSQ, but do it more cleanly.

@toby-pereira said in Rule X extended to score ballots:

> As an aside, regardless of what one thinks of Thiele methods in general, I do not consider RRV to be a good implementation of it.

You prefer Sequential Proportional Score Voting, correct?

---

> On the other hand, RRV and STV choose winner sets where all voters are strictly worse off than under the SSS winner set,

It's weird that RRV has done that since its mechanism is just to maximise the "satisfaction" score for each voter. I presume then that this is to do with electing sequentially rather than something fundamental to RRV itself. And I would also presume that electing sequentially can throw out weird anomalies for any voting method, and I don't see any particular reason why any method should be more susceptible than any other method to this.

As an aside, regardless of what one thinks of Thiele methods in general, I do not consider RRV to be a good implementation of it.

---

I wanted to see how these committees fared in terms of

a) the Maximin Support objective, equivalent to the max-Phragmen objective

b) stable priceability, which implies core

I transformed scores to approvals by having each voter choose a uniform random approval threshold in (0,1). It took me a little bit to formulate the linear programs, so there may be bugs, but my calculations give, averaged over 1000 trials,

AAABC: maximin support = 1 (quota), stable priceable probability = 0.4

ABBBB: maximin support = 0.75 (quota), stable priceable probability = 0

AAABB: maximin support = 0.84 (quota), stable priceable probability = 0

AAACC: maximin support = 0.96 (quota), stable priceable probability = 0.25

AABBC: maximin support = 1 (quota), stable priceable probability = 0.46

Important note: these are the values for the *committees* not for the selection rules that originally found them. The example as given is really a very edge case with tight numbers, so the randomness in approval thresholds helps smooth that out a little. I expect the committees these rules return would change a lot with a little noise added.

BTW, I think this might illustrate my heuristic objection to defining stability (and preferences over sets) as just pure linear utility. With the literal linear utility interpretation, ABBBB is stable and blocks both AABBC and AAACC. However, if we interpret the score 3/5 as a 60% probability of approval, suddenly ABBBB is *never* stable, and AABBC & AAACC both have non-trivial stability probability. Furthermore, if either (or both) 1) that score of 3 is lowered slightly to like 2.8 or 2) some small fraction of voters choose to bullet vote, all of a sudden ABBBB does not look particularly good either from a stability standpoint or utility.
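The random-threshold transform used in these trials is one line per voter; a sketch (my own helper name):

```python
import random

def score_to_approval(ballot, rng=random):
    """ballot: dict candidate -> score in [0, 1]. Draw one uniform threshold
    per voter and approve everything scored strictly above it, so a score of
    3/5 turns into an approval with probability 0.6."""
    t = rng.random()
    return {c for c, s in ballot.items() if s > t}

# With a fixed seed the draw is reproducible:
approved = score_to_approval({'A': 0.9, 'B': 0.6}, random.Random(0))
```

Averaging any approval-defined quantity (maximin support, stable priceability) over many such draws is what produces the probabilities in the table above.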

> anything satisfying priceability (which is a very intuitive criterion to me, implies PJR)

Can you (or somebody) make a priceability electowiki page?

---

> Also, on PJR, it's worth pointing out that Sainte-Laguë/Webster can in some circumstances fail the lower quota rule, so presumably fails this criterion

True! And moreover, anything satisfying priceability (which is a very intuitive criterion to me, and implies PJR) *must* be an extension of D'Hondt.

The example you have given for perfect representation is an example of how it is incompatible with Pareto efficiency. I definitely agree this is a mark against it.

I think we are really on the same page: we both agree that PJR is desirable but weak, just I view failures of PJR as more damning than you do. FWIW, I also prefer D'Hondt to Sainte-Laguë.

---

> Voters (generally) have a better sense of a preference ranking than they do actual utility distributions over candidates

Agreed, but they do have some sense, and that helps. This is the same as adding noise to the system, and it will largely average out.

> Voters' utilities (whether they acknowledge it or not) tend to decay somewhat geometrically over their preferences

I do not agree. This is what people say about money, but not candidates.

> Even given the above two assumptions, voters nonetheless tend to report utilities in a more linear way over their preferences

I think that even if it is flawed it would be good to be able to make this as an honest recommendation for how to vote.

---

> On the other hand, RRV and STV choose winner sets where all voters are strictly worse off than under the SSS winner set, so if we make the assumption* that the sum of the scores can be used to determine which overall committee the voter would approve, then this could be interpreted as quite a bad example for RRV and STV.
>
> \* I'm not calling it an unreasonable assumption, but it is an assumption and so I'm stating it. Perhaps we could test it with surveys, although in my opinion the meaning of scores depends on the voting system to some extent, so it might not be easy.

Even aside from scores, and looking at full approvals, there are scenarios where a "Pareto dominated" result is arguably better. See the archive here.

---

> @toby-pereira I think this example actually shows my point. In this case (with very high probability, if the approval sets are truly uncorrelated), any two of the winners will satisfy EJR (and thus also PJR/JR), so it is not restrictive at all.

My point wasn't that it was restrictive, but that it seemed a bit weak, making requirements about just one voter (but you acknowledged that later in your post).

> Unless I'm misunderstanding the setup, I'm not sure why you are saying the two 51% will necessarily be elected (although, in this case it does seem like the 'right' choice).

Well, the two 51% candidates should be elected under any reasonable method given the lack of any correlation. But in any case, the point is that it shows that approximately 1/8 of the electorate will be unrepresented despite being part of a Hare quota.

> Edit: I should probably mention there is another intuitive criterion, *perfect representation*. This is when the voters can be exactly divided into quotas such that each quota gets a unanimous winner. Obviously, this is also not always possible, but more importantly it is *incompatible* with EJR. This is one reason it may be reasonable to consider EJR 'clumsy.' However, it is compatible with PJR. It seems to me that PJR is weak enough that any noncompliance is likely indicative of deeper problems. In particular, optimization of the max-Phragmen metric implies PJR. Your 'squared load' metric I believe is equivalent to the var-Phragmen objective function, which implies JR.

There are cases where it is arguably undesirable to have perfect representation. I added an example to the wiki page. But to copy and paste:

Consider the following election with two winners, where A, B, C and D are candidates, and the number of voters approving each candidate are as follows:

100 voters: A, B, C

100 voters: A, B, D

1 voter: C

1 voter: D

A method passing the perfect representation criterion must elect candidates C and D despite near universal support for candidates A and B. This could be seen as an argument against perfect representation as a useful criterion.

Also, on PJR, it's worth pointing out that Sainte-Laguë/Webster can in some circumstances fail the lower quota rule, so presumably fails this criterion. See the example on Warren Smith's site. And I would generally consider this to be a fair system.

---

> This contradicts your prior statement about how a voter will adjust their scoring to the system.

Hm, fair enough.

I think my mental model of voter preferences is something like:

- Voters (generally) have a better sense of a preference ranking than they do actual utility distributions over candidates
- Voters' utilities (whether they acknowledge it or not) tend to decay somewhat geometrically over their preferences
- Even given the above two assumptions, voters nonetheless tend to *report* utilities in a more linear way over their preferences

This third point led me to tacitly apply some superlinear transformation to the utilities when thinking about who should win qualitatively, but I see what you are saying: if voters wanted this superlinear interpretation, they would have just voted that way. It still feels weird to me for C not to get even 1 winner, but I will try to quantify that in a more robust way.

---

> In the case of a method on that profile choosing AAABB or ABBBB, I would certainly feel 'cheated' if I were a Blue voter, and in subsequent elections I would probably be less willing to compromise.

This contradicts your prior statement about how a voter will adjust their scoring to the system.

Suppose the voter is using a mental model to map utility, u, to score, s:

s = S(u)

What we want is for this function to obey Cauchy's functional equation, so that:

S(u1 + u2) = S(u1) + S(u2)

This mental model is derived from how the system treats scores. In SSS and MES the scores are treated linearly, so such a model arises naturally. In RRV and Allocated Score this is not the case, so voters will have to adjust the S(u) function to compensate, even though it is not clear how.

Having this additive property is nice, since it means that if you like a candidate half as much as another then you should score them half as much. Simplicity is important.

Consider the Blue group comparing ['A1', 'B1', 'B2', 'B3', 'B4'] to ['A1', 'A2', 'A3', 'C1', 'C2'].

They expressed the scores B = 3 and C = 5:

B + B + B + B = 12

C + C = 10

B + B + B + B > C + C

S(uB) + S(uB) + S(uB) + S(uB) > S(uC) + S(uC)

S(uB + uB + uB + uB) > S(uC + uC)

uB + uB + uB + uB > uC + uC

Which proves they are happier with 4 Bs than 2 Cs. If they are not happier, then they did not use that mental model to map utility. In this sense, SSS and MES can punish strategic voting.
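To illustrate, any increasing solution of the functional equation is linear, S(u) = a·u; the scale factor below is hypothetical, chosen so that the expressed scores B = 3 and C = 5 are consistent with it:

```python
a = 0.6                 # hypothetical scale factor in the mental model S(u) = a * u
S = lambda u: a * u
uB, uC = 3 / a, 5 / a   # utilities that map to the expressed scores 3 and 5

# The score comparison (12 > 10) transfers directly to the utility comparison:
assert S(uB) + S(uB) + S(uB) + S(uB) > S(uC) + S(uC)
assert uB + uB + uB + uB > uC + uC
```

The particular value of a never matters, which is the point of the argument: any voter with a linear mental model is provably happier with the 4 Bs.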
