The ideal situation would be to have voters score each set of candidates, e.g. "a committee with A, B, C has a score of 3; one with A, B, D has a score of 5, ...". Then we could maximize the sum of scores. However, that's completely impractical for voters, it's difficult to model utilities, and a method like this would be extremely vulnerable to strategic exaggeration.

So, in the proportional context, so far we've found it easier to just deal with pass/fail criteria rather than VSE. That's not to say VSE couldn't be extended to the multiwinner context, it's just that it's complicated and we don't know how yet.

By the way, since PAV with infinite clones passes core (which it doesn't with a limited number of candidates), I presume the optimal version is probably properly proportional (i.e. passes perfect representation). I might update my paper to include this at some point.

I have updated the paper to mention the proportionality of Optimal PAV (with variable candidate weight allowed), which allows for a proper comparison with COWPEA - these two methods being the main candidates for a truly optimal cardinal PR method (practicalities aside).

Methods that guarantee core stability are of interest to me (see this thread, which I linked to earlier) even if they're not my priority. From what I've read, I think it's still unproven whether the core is always non-empty. But if you use a stability measure (as suggested in the thread) rather than an all-or-nothing criterion, it could be workable regardless.

BTW, we should probably distinguish between different-sized cores: the possibility of an empty Hare core is unknown, but Droop cores can definitely be empty (as the Condorcet paradox shows). What I'm interested in is satisfying the Hare core with high probability and satisfying the anti-Droop core guarantee with certainty; i.e. that the share of voters who would prefer some other committee is less than 1 / (seats - 1).

I assume nothing about the distribution. I speak of the average s/v, over all possible numbers of quotas in an interval (as defined above).

The bias that I speak of is differing s/v averages over low & high intervals.

Nothing whatsoever to do with a distribution.

In this post on Election Methods, you wrote:

[quote]
Here is what I mean by "bias". I claim that my meaning for bias is consistent with the usual understood meaning for bias:

For any two consecutive integers N and N+1, the interval between those two integers is "Interval N".

If it is equally likely to find a party with its final quotient anywhere in interval N, then determine the expected s/v for parties in interval N.

Compare that expected s/v for some small value of N with the expected s/v for some large value of N.

If the latter expected s/v is greater than the former, when using a certain seat allocation method, then that allocation method is large-biased.

If the opposite is true, then the method is small-biased.
[/quote]

I have highlighted the relevant part of your quote: "If it is equally likely to find a party with its final quotient anywhere in interval N". That is an assumption about the distribution. You might think it's a fair assumption. But it is an assumption, something you've been denying. So I'm glad that's clarified.

[quote]

And while that might seem unrealistic, we can see the case of very small parties that never get enough votes to win a seat. A particular party might be due about 0.1 seats at every election but never win one under a particular method. Is that bias?

[/quote]
Sure, but it's not the kind of bias that has traditionally been meant. Small parties with only fractional quotas were traditionally never really wanted in PR countries.

It seems we're changing the subject here. This is about a method being objectively unbiased; we are not talking about practicalities or about what is wanted. So do you admit to bias in the "Bias-Free" method, then?

[quote]

(Michael's method involves a 0^0 in the 0 to 1 seat range, so it appears to break, and I'm not sure how it is supposed to handle this case.)

[/quote]
No, it still works, though the usual formula doesn't apply there. There's a way to do the integral at that 0^0 point; it's an exception that has to be integrated as a separate problem.

The answer to that problem is a rounding point equal to 1/e.
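For what it's worth, here is a sketch of where the 1/e comes from, assuming the unbiasedness condition described earlier in the thread (final quotient uniform in the interval, expected s/v equal to 1, with the quota scaled to 1):

```latex
% General condition for the rounding point r_a in interval [a, a+1]:
%   \int_a^{r_a} \frac{a}{q}\,dq + \int_{r_a}^{a+1} \frac{a+1}{q}\,dq = 1,
% which solves to r_a = \frac{(a+1)^{a+1}}{e \, a^a}.
% For the 0-to-1 interval (a = 0) the first integral vanishes, leaving:
\int_{r_0}^{1} \frac{1}{q}\,dq = 1
\quad\Longrightarrow\quad
-\ln r_0 = 1
\quad\Longrightarrow\quad
r_0 = \frac{1}{e} \approx 0.368
```

So the 0^0 only appears if you plug a = 0 into the general formula; doing the a = 0 integral directly avoids it.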

To clarify then: parties consistently getting fewer than 1/e of a quota of votes are subject to systematic bias under the "Bias-Free" method.

[quote]

(after all Sainte-Laguë simply returns the most proportional result)…

[/quote]
…according to the difference measure of s/v variation, which doesn't make as much sense as the ratio measure. But of course your preference is entirely your business. But if you're going to say that Professor Huntington was wrong, you'll need more than an assertion. You'll need to say where you think he was wrong. Help that mathematics professor out by explaining where he made his error.

…& as I've explained to you many times, if by "most proportional" you mean "having least maximum variation in s/v", that's an entirely different matter from bias, whose meaning I've already told you several times.

Well, I've discussed Huntington's paper in the previous post, so that's sorted now. And you agree that Sainte-Laguë magically gives less bias than Huntington-Hill despite supposedly being worse. I also explained in a previous post why minimising the variance of s/v, measured arithmetically, is the best measure. The s/v values (one per voter) add up to a set number (s, in fact). It is, in essence, an arithmetic sample, not a geometric one. Using geometric variance breaks if a party has zero seats. If you had a sample that multiplied to a set number, then the geometric variance would make most sense. (Edit: if you were looking at v/s instead of s/v, it would make sense to use the harmonic mean and variance.)
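To make the arithmetic-variance point concrete, here is a rough sketch (the vote numbers are invented for illustration) comparing Sainte-Laguë and Huntington-Hill as highest-averages methods and measuring the vote-weighted arithmetic variance of s/v:

```python
import math

def apportion(votes, seats, rounding_point):
    """Highest-averages apportionment: repeatedly award the next seat to
    the party with the highest votes / rounding_point(seats_so_far)."""
    alloc = {p: 0 for p in votes}
    for _ in range(seats):
        def priority(p):
            d = rounding_point(alloc[p])
            # A zero rounding point means the party is guaranteed its first seat
            return votes[p] / d if d > 0 else float('inf')
        best = max(votes, key=priority)
        alloc[best] += 1
    return alloc

def sainte_lague(s):          # arithmetic-mean rounding point
    return s + 0.5

def huntington_hill(s):       # geometric-mean rounding point
    return math.sqrt(s * (s + 1))

def sv_variance(votes, alloc):
    """Vote-weighted arithmetic variance of the s/v ratios."""
    total_votes = sum(votes.values())
    mean = sum(alloc.values()) / total_votes
    return sum(v * (alloc[p] / v - mean) ** 2
               for p, v in votes.items()) / total_votes

votes = {'A': 53000, 'B': 24000, 'C': 23000}
for name, rp in [('Sainte-Lague', sainte_lague),
                 ('Huntington-Hill', huntington_hill)]:
    alloc = apportion(votes, 10, rp)
    print(name, alloc, sv_variance(votes, alloc))
```

With these made-up numbers, Sainte-Laguë gives A/B/C 6/2/2 and Huntington-Hill gives 5/3/2, and the Sainte-Laguë allocation has the lower arithmetic s/v variance, consistent with the known result that Webster/Sainte-Laguë minimises that measure.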

[quote]

The only way to get rid of bias under any assumptions…

[/quote]
BF has no bias.

As pointed out above, it only has no bias under certain assumptions. I could devise a distribution of voting behaviour (one that could exist in a possible world) under which it has either small- or large-party bias. The only way to eliminate any possibility of this would be to use a non-deterministic method. In any case, "Bias-Free" does have a small-party bias relative to Sainte-Laguë, which by the most sensible measure gives the most proportional result; that is itself a form of bias. But anyway, I'm repeating myself. I think we're probably done, because you're not going to reply. It's a shame: I think "Bias-Free" probably has some interesting theoretical properties, and it would be interesting to see them explained. But you asserted too much about it and were unable to discuss it in a reasonable manner.

Well, I wanted to check this forum out, & there was talk about doing a lot of polling, which I consider very useful to demonstrate how the methods work.

But the amount of participation in the recent poll wasn’t very promising.

Because as I pointed out in one of the threads about it, it wasn't run very well.

I prefer that, but it would be regarded as a complication. Anyway, tradition wanted to discourage small parties, and the option to vote for two parties would be rejected as a complication.

For a first PR proposal, just propose ordinary list-PR, like 2/3 of the world’s countries use.

Later we can propose the 2-party option.

By the way, Satisfaction Approval Voting can only be described as semi-proportional. You're wasting part of your vote on candidates that aren't elected. It's like SNTV except that you can split your vote up. They both have similar problems to FPTP.

They might be easy to explain, but they're not worth explaining!

You're right, of course, but that's why I like to bring up SAV as an "obvious" system with an obvious flaw (spoilers). Then I explain how PAV/SPAV fix that flaw with a minor change: split a vote only after a candidate is elected, not before.

A group I'm tangentially connected to has decided to switch from sequential proportional approval voting to cumulative voting (voters are given a number of points equal to the number of seats, and each voter can give any amount of their available points to one or more candidates). I'm concerned because I have heard that cumulative voting is susceptible to vote-splitting and bullet voting. Where can I learn more about these effects in cumulative voting?

The main problem with cumulative voting is what happens when voters don't bullet vote. Cumulative voting produces kind-of-proportional representation if voters are strategic and perfectly informed, because minority groups can coordinate to bullet-vote. However, too much honest voting by these groups can easily result in a bare majority sweeping all the seats; in other words, it requires some very complicated coordination (especially in small elections).

I don't think there are any proportional methods simpler than SPAV except party-list representation, or possibly sequential Ebert (although I'd consider sequential Ebert about as simple as SPAV).

Have you asked what makes this group think of SPAV as "too complex?"

I think the KP transformation is generally good because it maintains good criterion compliance when added to an approval method, without adding nasty surprises. And it's simple. Essentially all of the debate between Allocated Score, Method of Equal Shares, TEA etc. is about what to do with these messy scores that are hard to deal with when you use a score method rather than an approval method. KP sorts that out simply, with generally better criterion compliance. So the consideration of TEA versus those other methods then becomes irrelevant.
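As a sketch of how simple the KP transformation is (my own minimal version, not anyone's reference implementation): each score ballot out of K is split into K approval "layers", with layer t approving every candidate scored at least t.

```python
def kp_transform(ballots, max_score):
    """KP (Kotze-Pereira) transformation sketch: split each score ballot
    into max_score approval layers of weight 1/max_score each; layer t
    approves every candidate the voter scored at least t."""
    layers = []
    for scores in ballots:  # each ballot: dict of candidate -> integer score
        for t in range(1, max_score + 1):
            approved = frozenset(c for c, s in scores.items() if s >= t)
            layers.append((approved, 1 / max_score))
    return layers

# A ballot scoring A=5, B=3, C=0 out of 5 becomes five weight-1/5
# approval ballots: three approving {A, B} and two approving just {A}.
layers = kp_transform([{'A': 5, 'B': 3, 'C': 0}], 5)
```

The resulting weighted approval ballots can then be fed into any approval method (e.g. Phragmén, as suggested below).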

And as I mentioned in this thread, all the subtractive quota methods are really just weak approximations of Phragmén-based methods. So I'd use Phragmén + KP as the basic starting point of a method you have to do better than. If determinism isn't essential, you can do better still.

Essentially, 300 "candidates" are elected, which gives the potential for a high degree of PR. According to the paper, Sequential Phragmén, Sequential PAV, Method of Equal Shares and Phragmms all behave fairly similarly. I don't think I was previously aware of Phragmms. I'll have to look into it and potentially update the wiki.

I would also be interested in seeing how COWPEA Lottery performs. It is non-deterministic, but with 300 to elect, this is likely not to affect the result significantly. It also gains the advantage of being much cheaper to calculate, as well as passing the "Holy Grail" criteria of PR, strong monotonicity, Independence of Irrelevant Ballots (IIB) and the Universally liked candidate criterion (ULC).

A candidate-based system, with presumably smaller regions, would be preferable in my opinion, and there would be no need for a discussion about whether a minimum threshold was good or bad.

I personally prefer candidate-based PR to party-list in general, but I haven't really thought that through in an Israeli/Palestinian context. Israel is currently at one extreme of the spectrum, where voters have as little say over parties' candidates as possible. I definitely think any change in the opposite direction would help, even just going from closed list to open list. That would ideally allow voters in every party to veto any of their candidates and leaders parroting genocidal war-crime talking points, without turning their backs on the party platform entirely. This seems like an easy change to start with, since it would probably be less controversial than other ideas I support.

So really, under a system of PR that works and a parliamentary system that works, a stable government would have to be around the centre of public opinion. Whether that is in fact the case in Israel, I'm not really in a position to say.

My very limited understanding of Israeli politics leads me to believe that Netanyahu has been wildly unpopular most of my life, but he maintains power because nobody has been able to successfully challenge him and hold the position. I see Israel as not stable, but stagnant. I'd love to see actual Israeli citizens chime in here with some lived experience on the ground.

This paper discusses a lot of the approval proportionality criteria that have come up over the years. And there are a lot of them. For example, there's Justified Representation (JR), Fully Justified Representation (FJR), Extended Justified Representation (EJR), Proportional Justified Representation (PJR), Laminar Proportionality, Priceability, Stable Priceability, Perfect Representation (PR), Core Stability. And probably some I've missed.

That's a lot of proportionality criteria. But the question is whether we need that many and whether they're all useful. If I want to know if a particular approval method is "proportional", I don't want to have to check it against 10 different criteria and then weigh them all up.

Well, actually, I don't think any of them capture the essence of proportionality very well. On page 56 of that paper, there's a chart showing which criteria imply which others. As you can see, almost all of them imply lower quota. According to lower quota, under party voting, no party can receive fewer seats than its due share rounded down to the nearest integer. However, as can be seen here, Sainte-Laguë/Webster fails lower quota, despite being seen by many as the most accurately proportional method. I don't think proportionality criteria that imply lower quota are fit for purpose.

Of the remaining criteria, I think Justified Representation is too weak to be worth anything. Perfect Representation, however, is too strong, but I think it makes a good base for a criterion. According to the wiki:

if there is a possible election result where candidates could each be assigned an equal number of voters where each voter has approved their assigned candidate and no voter is left without a candidate, then for a method to pass the perfect representation criterion, such a result must be the actual result.

I would say the main reason it is too strong is that it is incompatible with strong monotonicity. Consider the following ballots:

x voters: A, B, C

x voters: A, B, D

1 voter: C

1 voter: D

With 2 to elect, a method passing Perfect Representation must elect CD regardless of the value of x. There could be almost unanimous support for both A and B, but CD (with half the votes each) would still be elected.

In my paper on the COWPEA method, I define Perfect Representation In the Limit (PRIL):

As the number of elected candidates increases, then for v voters, in the limit each voter should be able to be uniquely assigned to ¹⁄ᵥ of the representation, approved by them, as long as it is possible from the ballot profile.

As I explained in the paper:

The common thread among proportionality criteria is the notion that a faction that comprises a particular proportion of the electorate should be able to dictate the make-up of that same proportion of the elected body. But this can be subject to rounding and there can be disagreement as to what is reasonable when some sort of rounding is necessary. However, taken to its logical conclusions, each voter individually can be seen as a faction of ¹⁄ᵥ of the electorate for v voters.

I also say that any deterministic method should obey Perfect Representation when Candidates Equals Voters (PR-CEV):

For a deterministic approval method where a fixed number of candidates are elected, a stronger proportionality criterion is Perfect Representation when Candidates Equals Voters (PR-CEV): if the number of elected candidates is equal to the number of voters (v), then it must be possible for each voter to be assigned to a unique candidate that they approved, as long as it is possible from the ballot profile. This is because no compromise due to rounding is necessary at that point.

One other thing I explained about PRIL, in case it is considered too weak for any reason:

One potential downside is that it does not define anything about the route to Perfect Representation, other than that it must be reached in the limit as the number of candidates increases. However, in that respect it has similarities with Independence of Clones, which is a well-established criterion. Candidates are only considered clones if they are approved on exactly the same ballots (or ranked consecutively for ranked-ballot methods). We would also want a method passing Independence of Clones to behave in a sensible manner with near clones, but it is generally trusted that unless a method has been heavily contrived then it would do this. Similarly, one would expect the route to Perfect Representation in a method passing PRIL to be a smooth and sensible one unless a method is heavily contrived, and none of the methods considered in this paper are contrived in such a manner.

So this is why I consider PRIL to be the standard proportionality criterion for approval methods. Any deterministic method should also pass PR-CEV.

Optimised PAV Lottery is another non-deterministic method. In this, you work out the optimum amount of weight each candidate should have (by e.g. infinitely cloning all candidates and running an election with a very large number of seats), and then elect candidates probabilistically according to these weights. (Though you would have to work out the distribution again every time a candidate is elected.) Unlike deterministic PAV, it is thought (though not proven) to be proportional, passing the Perfect Representation In the Limit criterion.

I also think that sequential methods generally fail participation (for a suitable multi-winner definition), whereas optimal elect-all-at-once methods are computationally infeasible. However, I think non-deterministic sequential methods can get around this failure. Optimised PAV Lottery is computationally infeasible anyway, but COWPEA Lottery is easily runnable.

For example, if 50% of the winner's supporters voted for party A, then that party would get half of the district seat.

As long as every district winner receives more than 50% of the vote, this would prevent overhang seats.

Here's a simple example to show proportionality. It works the same regardless of the measure of support. Basically, all these methods are clones of STV but without vote splitting. The measures I'm talking about are score, smallest pairwise win, and strongest beatpath. I should say that I don't really know whether these measures would work for sorting.

Setup:

2 parties: A, B.

2 seats.

100 voters.

quota: 50 voters.

Round 1:

The votes are A 55, B 45.

The best quotas of 50 voters give A 50 and B 45.

A gets a seat. The quota of 50 A-voters are removed.

Round 2:

Now we count the remaining voters, A 5, B 45.

B gets a seat.

If you just have a fixed quota, then the voters who get their candidates elected early can get a bad deal, because they pay a whole quota, whereas later on there might not be a candidate with a whole quota of votes, and yet you have to elect one anyway, so that candidate's voters get their candidate more "cheaply".

So you might then look for a quota that distributes the cost more evenly, and that's all Phragmén really does. It distributes the load or cost across the voters as evenly as it can.
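A minimal sketch of that load-balancing idea (sequential Phragmén, ignoring ties and ballot weights): each seat costs one unit of "load", shared among the winner's approvers, and each round we elect the candidate whose supporters end up with the smallest equalised load.

```python
def seq_phragmen(ballots, seats):
    """Sequential Phragmen sketch: each elected candidate adds one unit
    of load, spread over their approvers; elect whichever candidate
    leaves their approvers with the smallest equalised load."""
    candidates = set().union(*ballots)
    load = [0.0] * len(ballots)
    elected = []
    for _ in range(seats):
        best, best_load = None, float('inf')
        for c in candidates - set(elected):
            approvers = [i for i, b in enumerate(ballots) if c in b]
            if not approvers:
                continue
            # Equalised load of c's approvers after paying 1 unit for c
            new_load = (1 + sum(load[i] for i in approvers)) / len(approvers)
            if new_load < best_load:
                best, best_load = c, new_load
        for i, b in enumerate(ballots):
            if best in b:
                load[i] = best_load
        elected.append(best)
    return elected

# 4 voters approve A (3 of them also B), 2 approve only C; 2 seats.
# A is cheapest first (load 1/4 each); then C (1/2) beats B (~0.58).
print(seq_phragmen([{'A', 'B'}] * 3 + [{'A'}] + [{'C'}] * 2, 2))
```

No quota appears anywhere: the "price" of each seat emerges from how evenly the load can be spread.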

I see subtractive fixed-quota methods as a cheap hack really.

But just to be clear anyway: when I was talking about not wanting to elect favourite-of-nobody candidates, I wasn't referring to stopping the election of such candidates in principle. But we've discussed certain specific scenarios where we all seem to agree that electing candidates who are the favourite of nobody isn't the best thing to do, and I was discussing ways to elect the candidates we would want in this situation.

If A, B, C and D are parties fielding multiple candidates, then with the following ballots, what proportion of the seats should each party win?

250: AC

250: AD

250: BC

250: BD

2: C

2: D

COWPEA would say that A and B would each win just under 1/4, and C and D would each win just over 1/4.

However, Optimised PAV would elect C and D with half the weight each, and A and B would not get any weight. The multi-winner Pareto efficiency criterion assumes that a voter's satisfaction with a result can be measured purely by the number of elected candidates they have approved. And going by that, any result containing any A or B candidates is Pareto dominated by CD.

This can be seen as a two-dimensional voting space with an AB axis and a CD axis. PAV would ignore the AB axis. Arguably, though, COWPEA makes better use of the voting space.

That was an example where any proportions are allowed, but here is another where 2 candidates are to be elected.

150: AC

100: AD

140: BC

110: BD

1: C

1: D

Basically, any deterministic method would elect either AB or CD. CD Pareto-dominates AB in this multi-winner sense, as all 502 voters have an approved candidate under CD, compared with 500 under AB (no one has two approved candidates). However, AB is more proportional: 250 voters have approved A and 250 have approved B, whereas 291 have approved C and only 211 D. So under the CD result, the 211 D voters would wield a disproportionate amount of power. So what do you think? Does that matter, or is it all about the number of approved candidates?

If you go for AB, then you are rejecting the multi-winner Pareto efficiency criterion, but also consistency as a by-product. This is because you can have a set of ballots where the C and D approvals are swapped round, so it's 211 for C and 291 for D but otherwise the same. Combining the two sets of ballots gives:

250: AC

250: AD

250: BC

250: BD

2: C

2: D

If there are 2 to elect here, it must be CD. So if you went for AB in the previous example, then you are rejecting the consistency criterion in multi-winner elections.

When there's more than one winner, what happens depends on how you interpret the scores. You could measure a voter's satisfaction by adding up the scores they have given to the elected candidates, but I think that might be unsatisfactory in a few ways. There's always debate about how to interpret scores, what they mean, and whether absolute numerical values should really be used in their raw form.

Instead, the scores could be used as layers of approval. This basically means that a voter's satisfaction with a candidate set is determined by the single highest score they've given to a candidate in the set, with the next best used as a tie-break, and so on. So for scores out of 5, a single 5 is better than multiple 4s, etc.
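One way to sketch that comparison (my own illustration, not a settled definition): take the scores a voter gave the committee members, sorted from highest down, and compare those lists lexicographically.

```python
def satisfaction_key(ballot, committee):
    """Scores as layers of approval: a voter's satisfaction with a
    committee is the list of scores they gave its members, compared
    from the highest score down (lexicographically)."""
    return sorted((ballot.get(c, 0) for c in committee), reverse=True)

ballot = {'A': 5, 'B': 4, 'C': 4}
# For committees of equal size, one 5 outranks any number of 4s:
# {A, D} gives [5, 0], {B, C} gives [4, 4], and [5, 0] > [4, 4].
better = satisfaction_key(ballot, {'A', 'D'}) > satisfaction_key(ballot, {'B', 'C'})
```

This keeps comparisons purely ordinal: the numeric score values never get added together, only ranked.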

This should keep it relatively simple. Also if candidates are elected sequentially, it should be simple enough to calculate the results.

I think this should be a decent enough method and I think I'd prefer it to things like Allocated Score and Sequentially Spent Score.

Obviously COWPEA Lottery using scores as layers of approval is God-tier in terms of criterion compliance, and very simple to implement, but it is non-deterministic, which might be too much for some people, so this method could be a good compromise.

Edit - You'd have to work out exactly how to measure the stability of a candidate set, though. Let's say the first 2 candidates elected are AB. Then you need to test e.g. ABC, ABD, ABE etc. to find the 3rd candidate. But I think you might be able to test them against each individual candidate not in the set. So test ABC against D, E, F etc. separately.

Edit 2 - You'd probably have to test each potential set against all the other subsets. So ABC would go against ABD, ABE etc., plus AD, AE, BD, BE, as well as D, E etc. Still not that many in the general scheme of things.

And here's one called "Proportionality and the Limits of Welfarism" (comparing Phragmén and Thiele). I didn't actually catch the guy's name. It didn't sound like the names in the paper - Dominik Peters or Piotr Skowron (and it doesn't look like Piotr Skowron) - and the subtitles have it as Scott Anand, which is a name I'm not familiar with.

COWPEA Lottery with layers of approval - Voters score or grade candidates. The actual values are irrelevant, but when a ballot is picked at random, only the top layer of (relevant) candidates is looked at. A pro is that it gives voters more distinguishing power between candidates. The cons are that it becomes more complex, and that to vote optimally a voter would have to grade basically all of the candidates, which could be quite a lot of them.

This one could be used nicely with the 0 to 5 "star" ballot. So to clarify the process to elect a candidate: pick a ballot at random and eliminate all but the top or joint-top rated candidates on that ballot. Pick another ballot and retain only the candidate(s) rated highest among those still in contention. Continue until one candidate remains. Elect that candidate.
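The single-candidate draw described above might be sketched like this (my own reading of the process; the tie-breaking at the end and the treatment of all-zero ballots are assumptions):

```python
import random

def cowpea_lottery_draw(ballots, max_draws=1000):
    """One COWPEA Lottery draw with scores as layers of approval:
    draw random ballots, each keeping only its highest-scored
    candidates among those still in contention, until one remains."""
    in_contention = set().union(*(b.keys() for b in ballots))
    for _ in range(max_draws):
        if len(in_contention) <= 1:
            break
        ballot = random.choice(ballots)
        top = max(ballot.get(c, 0) for c in in_contention)
        if top > 0:  # assumption: a ballot scoring all remaining candidates 0 narrows nothing
            in_contention = {c for c in in_contention
                             if ballot.get(c, 0) == top}
    # Exact clones can never be separated by ballots; break ties randomly
    return random.choice(sorted(in_contention))

print(cowpea_lottery_draw([{'A': 5, 'B': 3}, {'A': 4, 'C': 2}]))
```

For a multi-seat election you would repeat the draw for each seat (removing already-elected candidates first), which is what keeps the method so cheap to run.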
