Here's a simple example to show proportionality. It works the same regardless of the measure of support: score, smallest pairwise win, or strongest beatpath. Essentially, these measures give clones of STV but without vote splitting. (I don't really know whether the measures would work for sorting, though.)

Setup:

2 parties: A, B.

2 seats.

100 voters.

quota: 50 voters.

Round 1:

The votes are A 55, B 45.

The best quotas of 50 voters give A 50 and B 45.

A gets a seat. The quota of 50 A-voters are removed.

Round 2:

Now we count the remaining voters, A 5, B 45.

B gets a seat.
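The two rounds above can be sketched in a few lines. This is a minimal sketch, assuming support is just the remaining vote count for each party as in the example; it is not a full STV-style implementation.

```python
# Minimal sketch of the two-round example above. Support here is just the
# remaining vote count for each party, as in the example.
votes = {"A": 55, "B": 45}
seats = 2
quota = 50  # 100 voters / 2 seats

elected = []
remaining = dict(votes)
for _ in range(seats):
    winner = max(remaining, key=remaining.get)  # most remaining support
    elected.append(winner)
    # a whole quota of the winner's voters "pays" for the seat and is removed
    remaining[winner] = max(0, remaining[winner] - quota)

print(elected)  # ['A', 'B']
```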

If you just have a fixed quota, then the voters who get their candidates elected early can get a bad deal: they pay a whole quota, whereas later on there might not be a candidate with a whole quota of votes, and yet you have to elect one anyway, so that candidate's voters get their candidate more "cheaply".

So you might then look for a quota that distributes the cost more evenly, and that's all Phragmén really does. It distributes the load or cost across the voters as evenly as it can.

I see subtractive fixed-quota methods as a cheap hack really.

But just to be clear anyway - when I was talking about not wanting to elect the favourite-of-nobody candidates, I wasn't referring to stopping the election of such candidates in principle. But we've discussed certain specific scenarios where we all seem to agree that electing the candidates that are the favourite of nobody isn't the best thing to do. And I was discussing ways to elect the candidates we would want to in this situation.

If A, B, C and D are parties fielding multiple candidates, then with the following ballots, what proportion of the seats should each party win?

250: AC

250: AD

250: BC

250: BD

2: C

2: D

COWPEA would say that A and B would each win just under 1/4, and C and D would each win just over 1/4.

However, Optimised PAV would elect C and D with half the weight each, and A and B would not get any weight. The multi-winner Pareto efficiency criterion assumes that a voter's satisfaction with a result can be measured purely by the number of elected candidates they have approved. And going by that, any result containing any A or B candidates is Pareto dominated by CD.

This can be seen as a two-dimensional voting space with an AB axis and a CD axis. PAV would ignore the AB axis. Arguably though COWPEA makes better use of the voting space.

That was an example where any proportions are allowed, but here is another where 2 candidates are to be elected.

150: AC

100: AD

140: BC

110: BD

1: C

1: D

Basically any deterministic method would elect either AB or CD. CD Pareto dominates AB in this multi-winner sense, as all 502 voters have an approved candidate under CD, compared with 500 under AB (no one has two approved candidates either way). However, AB is more proportional: 250 voters have approved A and 250 have approved B, whereas 291 have approved C and only 211 D. So under the CD result, the 211 D voters would wield a disproportionate amount of power. So what do you think? Does that matter, or is it all about the number of approved candidates?
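The counts in that comparison are easy to verify directly. A quick sketch, with the ballot counts taken from the example:

```python
# Quick check of the counts in the example above.
ballots = [({"A", "C"}, 150), ({"A", "D"}, 100),
           ({"B", "C"}, 140), ({"B", "D"}, 110),
           ({"C"}, 1), ({"D"}, 1)]

def covered(committee):
    # voters with at least one approved candidate in the committee
    return sum(n for approved, n in ballots if approved & committee)

print(covered({"A", "B"}))  # 500
print(covered({"C", "D"}))  # 502

# per-candidate approval totals
totals = {c: sum(n for approved, n in ballots if c in approved) for c in "ABCD"}
print(totals)  # {'A': 250, 'B': 250, 'C': 291, 'D': 211}
```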

If you go for AB, then you are rejecting the multi-winner Pareto efficiency criterion, but also consistency as a by-product. This is because you can have a set of ballots where the C and D approvals are just swapped round. So it's 211 for C and 291 for D but otherwise the same. Then combining the two sets of ballots together you get:

250: AC

250: AD

250: BC

250: BD

2: C

2: D

If there are 2 to elect here, it must be CD. So if you went for AB in the other example, then you are rejecting the consistency criterion in multi-winner elections.

When there's more than one winner, what happens depends on how you interpret the scores. You could measure a voter's satisfaction by adding up the scores they have given to the elected candidates, but I think that might be unsatisfactory in a few ways. There's always debate about how to interpret scores and what they mean, and whether absolute numerical values should really be used in their raw form.

Instead, the scores could be used as layers of approval. This basically means that a voter's satisfaction with a candidate set is determined by the single highest score they've given to a candidate in the set, next best used as a tie-break, and so on. So for scores out of 5, a single 5 is better than multiple 4s etc.

This should keep it relatively simple. Also if candidates are elected sequentially, it should be simple enough to calculate the results.
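The lexicographic comparison above can be sketched by sorting each voter's scores for the elected set in descending order and comparing tuples. The helper name here is mine, not from any standard method definition:

```python
# Sketch of "scores as layers of approval": a voter's satisfaction with a set
# is compared by their highest given score, then the next, and so on.
def satisfaction(ballot, committee):
    # Descending sort means tuple comparison checks the best score first,
    # so a single 5 beats any number of 4s.
    return tuple(sorted((ballot.get(c, 0) for c in committee), reverse=True))

ballot = {"A": 5, "B": 4, "C": 4}
print(satisfaction(ballot, {"A"}) > satisfaction(ballot, {"B", "C"}))  # True
```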

I think this should be a decent enough method and I think I'd prefer it to things like Allocated Score and Sequentially Spent Score.

Obviously COWPEA Lottery using scores as layers of approval is God-tier in terms of criterion compliance, and very simple to implement, but it is non-deterministic, which might be too much for some people, so this method could be a good compromise.

Edit - You'd have to work out exactly how to measure the stability of a candidate set though. Let's say the first 2 candidates elected are AB. Then you need to test e.g. ABC, ABD, ABE etc. to find the 3rd candidate. But I think you might be able to test them against each individual candidate not in the set. So test ABC against D, E, F etc. separately.

Edit 2 - You'd probably have to test each potential set against all the other subsets. So ABC would go against ABD, ABE etc., plus AD, AE, BD, BE, as well as D, E etc. Still not that many in the general scheme of things.

And here's one called "Proportionality and the Limits of Welfarism" (comparing Phragmén and Thiele). I didn't actually catch the guy's name. It didn't sound like the names in the paper - Dominik Peters or Piotr Skowron (and it doesn't look like Piotr Skowron) - and the subtitles have it as Scott Anand, which is a name I'm not familiar with.

COWPEA Lottery with layers of approval - Voters score or grade candidates. The actual values are irrelevant, but when a ballot is picked at random only the top layer of (relevant) candidates is looked at. A pro is that it gives voters more distinguishing power between candidates. The cons are that it becomes more complex, and that to vote optimally a voter would have to grade basically all of the candidates, which could be quite a lot of them.

This one could be used nicely with the 0 to 5 "star" ballot. So to clarify the process to elect a candidate - pick a ballot at random and eliminate all but the top or joint top rated candidates on that ballot. Pick another ballot and retain only the candidate(s) that are the highest rated among those still in contention. Continue until one candidate remains. Elect that candidate.
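The single-candidate lottery step described above can be sketched as follows. This draws ballots in a random order without replacement so the loop terminates, and breaks any leftover tie at random; the function and variable names are mine, not an official implementation.

```python
import random

# Sketch of the lottery step: draw ballots at random; each drawn ballot
# retains only its top-rated candidates among those still in contention.
def elect_one(ballots, candidates, rng=random.Random()):
    in_contention = set(candidates)
    for ballot in rng.sample(ballots, len(ballots)):  # random order, no repeats
        top = max(ballot.get(c, 0) for c in in_contention)
        in_contention = {c for c in in_contention if ballot.get(c, 0) == top}
        if len(in_contention) == 1:
            break
    # if the ballots run out without a unique winner, break the tie at random
    return rng.choice(sorted(in_contention))

ballots = [{"A": 5, "B": 3}, {"B": 5, "C": 4}, {"C": 5, "A": 2}]
winner = elect_one(ballots, ["A", "B", "C"])
print(winner in {"A", "B", "C"})  # True
```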

Anyway, not that it will do anything, but I Tweeted at them today with my suggestion here.

By the way, I definitely think that Sequential proportional score voting (also known as SPAV + KP) is the best method for this. When you're electing an essentially unlimited number of items in a sequential manner, you do not want to be messing about with quotas etc. A Thiele method is ideal for this, and as it's for scores (well, stars), I consider this to be the best version as it passes multiplicative and additive scale invariance, which other methods do not. IMDb uses a 1 to 10 star scale, and if 1 were subtracted from every score to make it 0 to 9, the results under Sequential proportional score voting would be identical.

I don't generally use Twitter, but feel free to engage with the Tweet to like retweet etc. so it can gain more traction (if you agree with it obviously).

So this is my attempt to apply that kind of procedure to political parties and representatives. Forgive my lack of education regarding how political parties work:

- There should be a government body that registers political parties and demands the compliance of all political parties to its procedures in order for them to acquire seats for representation;
- (Eyebrow raising, but you might see why...) Every voter must register as a member of exactly one political party in order to cast a ballot (?);
- Each political party A is initially reserved a number of seats in proportion to the number of voters with membership in A; the fraction of seats reserved for A is P(A);
- For each pair of political parties A and B (where possibly B = A), a fraction of seats totaling P(A~B) := P(A)P(B) will be reserved for candidates *nominated* by A and *elected* by B; these seats will be called *ambassador* seats from A to B when B is different from A, and otherwise will be called the *main platform* seats for A;
- Let there be a *support quota* Q(A~B) for the number of votes needed to elect ambassadors from A to B, and call P(A~B) the *ambassador quota* of party A for B. If E(A~B) is the fraction of filled A-to-B ambassador seats (as a fraction of all seats), i.e. nominees from A who are actually elected by members of B, then A will only be allowed to elect P(A~A) * min{min{E(A~B)/P(A~B), E(B~A)/P(B~A)} : B ≠ A} of its own nominees. That is, the proportion of reserved main-platform seats that A will be allowed to fill is the least fraction of reserved ambassador seats it fills in relation to every other party, counting both the ambassadors from A to other parties and the ambassadors from other parties to A.
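To illustrate the cap on main-platform seats, here is a small numerical sketch. The three parties, their membership shares and the half-filled A-to-C quota are made-up numbers for illustration only.

```python
# Numerical sketch of the main-platform cap P(A~A) * min{...}.
parties = ["A", "B", "C"]
share = {"A": 0.5, "B": 0.3, "C": 0.2}  # membership proportions P(A) etc.
P = {x: {y: share[x] * share[y] for y in parties} for x in parties}  # P(x~y)
E = {x: {y: P[x][y] for y in parties} for x in parties}  # start: all quotas met
E["A"]["C"] = 0.5 * P["A"]["C"]  # A fills only half its A-to-C ambassador quota

def main_platform_cap(a):
    # P(a~a) * min over b != a of min{E(a~b)/P(a~b), E(b~a)/P(b~a)}
    ratios = [min(E[a][b] / P[a][b], E[b][a] / P[b][a])
              for b in parties if b != a]
    return P[a][a] * min(ratios)

print(main_platform_cap("A"))  # half of P(A~A), i.e. 0.5 * 0.25
print(main_platform_cap("B"))  # unaffected: the full P(B~B) = 0.3 * 0.3
print(main_platform_cap("C"))  # C is dragged down too: 0.5 * P(C~C)
```

Note how A's shortfall caps both A's and C's main-platform seats, which is the entanglement the procedure relies on.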

This procedure forces parties to also nominate candidates that compromise between different party platforms in order to obtain seats for any main-platform representatives. If a party fails to meet its quota for interparty compromises, it will lose representation. On the flip side, this set up will also establish high incentives for other parties to compromise with them in order to secure their own main-platform representation. In total, this system would give parties high incentives to compromise with each other and find candidates in the middle ground, which will serve as intermediaries between their main platforms.

Basically, here the outlines indicate seats open to be filled by candidates who are nominated by the corresponding party, and the fill color indicates seats open for election by the corresponding party:

Seats with outlines and fills of non-matching color are ambassador seats, and seats with matching outline and color are main platform seats. In terms of party A, by failing to nominate sufficiently-many candidates who would meet the support quota Q(A~B) to become elected as ambassadors from A to B, or by failing to elect enough ambassadors from B to A, party A restricts its own main platform representation and that of B simultaneously. By symmetry the reciprocal relationship holds from B to A. Therefore all parties are entangled in a dilemma: to secure main-platform representation, parties must nominate a proportional number of candidates who are acceptable enough to other parties to be elected as ambassadors.

To see that all needed seats are filled in the case of a stalemate, where parties refuse to nominate acceptable candidates to other parties and/or refuse to elect ambassadors, the election can be redone with the proportions being recalculated according to the party seats that were actually filled.

The support quotas collectively serve as a non-compensatory threshold to indicate sufficient levels of inter-party compromise. Ordinary PR is identical to PR with ambassador quotas but with all support quotas set to zero, whereby there is no incentive to nominate compromise candidates.

The purpose of this kind of procedure is twofold: firstly, it should significantly enhance the cognitive diversity of representatives, and secondly, it should significantly strengthen more moderate platforms (namely those of the ambassadors) that can serve as intermediaries for compromises between the main platforms of parties. Every party A has a natural “smooth route” from its main platform to the main platform of every other party: The main platform of A should naturally be in communication with ambassadors from A to B, who should naturally communicate with ambassadors from B to A, who should naturally communicate with the main platform of B.

Also, this procedure gives small parties significant bargaining power in securing representation. Large parties will have much more representation to lose than the small parties that are able to secure seats if the small parties refuse to elect any ambassadors, so rationally speaking, large parties should naturally concede to nominating sufficiently many potential ambassadors whose platforms are closer to the main platforms of those small parties. The same rationale holds for the potential ambassadors nominated by small parties, who also should tend to have platforms closer to the main platform of the small party.

Finally, this system creates significant incentives for voters to learn about the platforms of candidates from other parties who stand to reserve seats for representatives.

- Each voter i casts a score ballot S_i.
- Suppose S_i ranks *n* candidates top. Create a matrix M_i and vector **w**_i where M_i[a, :] = S_i / n and **w**_i[a] = 1/n if *a* is ranked top, else both are 0. (Effectively, we're combining all the votes that score each candidate top into one basket. If multiple candidates are scored top, the vote is split evenly between them.)
- Let M be the sum over all voters *i* of M_i and **w** the sum of all **w**_i. (M needs to be normalized by **w**.)
- Let X be formed by dividing row *j* of M by **w**[j]. Run RRV (or its optimal form) where the rows of X are "ballots" and the elements of **w** are the initial ballot weights.

Example: The ballot (A=1, B=2/3, C=1/3, D=0) becomes

```
M_i =
1 2/3 1/3 0
0 0 0 0
0 0 0 0
0 0 0 0
w_i = [1 0 0 0]^T
```

while (A=1/2, B=1, C=0, D=1) becomes

```
M_i =
0 0 0 0
1/4 1/2 0 1/2
0 0 0 0
1/4 1/2 0 1/2
w_i = [0 1/2 0 1/2]^T
```

Of course, I don't like the discontinuity involved (in splitting between candidates that are scored "top"), so I would change the calculation of M to be proportional to the elements of the voter's ballot (so that it ends up looking like M_i[:, :] = S_i S_i^T and **w**_i = S_i/sum(S_i)). The ballot (A=1, B=2/3, C=1/3, D=0) then becomes

```
M_i =
1 2/3 1/3 0
2/3 4/9 2/9 0
1/3 2/9 1/9 0
0 0 0 0
w_i = [1/2 1/3 1/6 0]^T
```
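The soft construction is easy to reproduce exactly with fractions. A sketch for the ballot above:

```python
from fractions import Fraction as F

# Sketch of the "soft" construction: M_i is the outer product S_i S_i^T and
# w_i = S_i / sum(S_i). Exact fractions reproduce the worked example above.
S = [F(1), F(2, 3), F(1, 3), F(0)]   # ballot (A=1, B=2/3, C=1/3, D=0)
M = [[a * b for b in S] for a in S]  # outer product S_i S_i^T
w = [a / sum(S) for a in S]          # sums to 1

print(M[1] == [F(2, 3), F(4, 9), F(2, 9), 0])  # True (the row for B)
print(w == [F(1, 2), F(1, 3), F(1, 6), 0])     # True
```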

I call this **soft Simmons PR** (by analogy to "softmax" which was partially an inspiration).

The nice thing is that since there are only *c* distinct "ballots" in the final step, there might be some kind of efficient special-case algorithm to determine the optimal-PAV (or its score counterpart) winner slate, although I am not completely sure of this.

(Can I not use LaTeX here?)

There are rules which are committee monotone which will elect AC (e.g. SAV).

That means that SAV elects C for k=1, which is definitely illogical. As for MES, part of that might be because it's a little unclear for which values of x the following generalization should elect AC instead of BC:

A (1-x)/2

AB x/2 - epsilon

BC x/2 + epsilon

C (1-x)/2

All-at-once d'Hondt PAV elects AC for x < 2/3. I think my argument is more persuasive if x is closer to (but still more than) 1/2, and epsilon is small compared to x-1/2.
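That threshold is easy to check by brute force. A sketch of all-at-once d'Hondt PAV on this profile, choosing a committee of 2 from {A, B, C} with exact fractions:

```python
from fractions import Fraction as F
from itertools import combinations

# Brute-force all-at-once d'Hondt PAV on the profile above, to check the
# claimed x < 2/3 threshold.
def pav_winner(x, eps, seats=2):
    ballots = [({"A"}, (1 - x) / 2), ({"A", "B"}, x / 2 - eps),
               ({"B", "C"}, x / 2 + eps), ({"C"}, (1 - x) / 2)]

    def score(committee):
        # harmonic (d'Hondt) PAV satisfaction: 1 + 1/2 + ... + 1/k
        return sum(weight * sum(F(1, j)
                                for j in range(1, len(approved & committee) + 1))
                   for approved, weight in ballots)

    return max((frozenset(c) for c in combinations("ABC", seats)), key=score)

print(sorted(pav_winner(F(3, 5), F(1, 1000))))   # ['A', 'C'] for x = 3/5 < 2/3
print(sorted(pav_winner(F(7, 10), F(1, 1000))))  # ['B', 'C'] for x = 7/10 > 2/3
```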

> If a tie in the legislature genuinely reflects a near deadlock of public opinion, 49.9% of the public getting their way over 50.1% is hardly a travesty.

I suppose, but I think that there could be interesting effects (some potentially negative, maybe) to think about in the context of the elections themselves if such a legislative tiebreaking rule were implemented. For example, in a 2-seat cardinal PR election, if the candidates are aware that the seat-winner who gets more score points will be given ultimate power in the 2-seat legislature, that would be an incentive to try to be more of a consensus candidate (to whatever extent possible without upsetting one's "base").

But more on topic, I think you are making 2 assumptions you do not know you are making. You assume that all existing parties are independent and uncorrelated. More mathematically, the assumption is that the parties form an orthogonal basis for a political space. This is quite clearly false, and I hope I do not need to explain why.

Secondly, by having all of a voter's endorsement put towards one party, you are preventing exact PR. In fact, I would think that in general this effect is larger than the effect you are bringing up here. To do this properly you would need to know the vector representation of each voter in the space defined by the parties. Having that, you could calculate exact PR.

> Do you know if the quota updates between rounds?

I do not believe that was the intent in this method, but perhaps there is something more slick that could be done incorporating the ideas of Sequentially Shrinking Quota.

The RRV reweighting scheme doesn't decrease the total number of votes by a quota each time, so it could change.

The RRV system has more issues than that, as is pointed out in the Single distributed vote page. So the potential system here is a way to combine the changing quota idea from Sequentially Shrinking Quota, the balanced reweighting from Single distributed vote and the selection from Sequential Monroe. It may not be possible, but if I find some time it might be straightforward. The issue with Sequentially Shrinking Quota was always the calculation of the quota size, but reweighting gives some insight there.

My initial reaction was that this system would discourage the use of the middle of the range, because if you give some points to a candidate but are not in the quota, then you pay for that candidate but don't contribute to their election.

As I pointed out in my first post, a mismatch between selection and exhaustion/allocation is likely to cause strategic vulnerabilities. The vote management things you highlight could sink the system. However, there are many strategies people can use, and it remains unclear (at least to me) how their effectiveness balances out. A weak exploit which is easy and obvious may be better or worse than a strong but rare exploit. These are things we cannot really simulate. Things like vote management require a group effort, and that relies on the organizational abilities and cohesiveness of factions. I suspect the worst exploits are those which are obvious to a single voter and unlikely to backfire.

Independently of whether it is viable in the real world, it would be a great extension of the field to have a system like the one I propose. It is good to compare and contrast.

> If every voter has approved at least as many candidates in set X as set Y, and at least one voter has approved more candidates in set X than set Y, then set Y should not be the winning set.

But I then countered in this thread that it might not actually be such a desirable property.

However, it does leave me wondering how optimal candidate Thiele voting would behave. Thiele's biggest failing is that it fails ULC, but with different candidate weighting allowed, a universally liked candidate would get all the power, so the problem might go away. But there might be residual problems when a candidate isn't universally liked, but is by certain factions, or if factions mix in a certain way. So I'm not sure if the problem would remain. Thiele would automatically pass the Pareto criterion above, though as said, it's debatable how desirable it is.

I think in general it would lead to more majoritarian but less purely proportional results than COWPEA.

But the main problem is that it fails scale invariance. Well, it passes in a multiplicative way as it is defined on the wiki, but not if you add a constant to the scores.

For example, if everyone scores 1 to 10 instead of 0 to 9 (so just adds 1 to every score), you can get a different result. KP + SPAV (also known as Sequential Proportional Score Voting or SPSV) passes this. I know it might seem unsatisfactory to "split" the voter with KP, but in terms of passing criteria, it seems to do the job.
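A sketch of the KP transform this relies on (the helper name is mine): each score ballot becomes one approval layer per score level, and adding 1 to every score only adds a layer that approves everyone the voter scored, which treats all candidates identically.

```python
# Sketch of the KP transform (helper name is mine): a ballot with scores in
# 0..max_score becomes max_score approval layers, where layer j approves every
# candidate scored above j.
def kp_transform(ballot, max_score):
    return [{c for c, s in ballot.items() if s > j} for j in range(max_score)]

b = {"A": 9, "B": 5, "C": 0}                # 0-to-9 ballot
shifted = {c: s + 1 for c, s in b.items()}  # same ballot on a 1-to-10 scale

layers_09 = kp_transform(b, 9)
layers_110 = kp_transform(shifted, 10)

# Adding 1 to every score only adds one bottom layer that approves every
# candidate; informally, such a layer affects all candidates equally, which is
# why SPAV on the layers (SPSV) gives the same result either way.
print(layers_110[0] == {"A", "B", "C"})  # True - the extra bottom layer
print(layers_110[1:] == layers_09)       # True - everything else matches
```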
