Let each voter submit an approval ballot.

Weight his approvals by

P*log(1/P)+(1-P)*log(1/(1-P))

where P is the fraction of candidates he approved.
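In code, the proposed weight is just the binary entropy of the approval fraction; a minimal sketch (function name mine, log base 2 chosen arbitrarily):

```python
import math

def entropy_weight(approved: int, num_candidates: int) -> float:
    """P*log(1/P) + (1-P)*log(1/(1-P)) with P = fraction of candidates approved.
    Base 2 is used here; the base only rescales all weights uniformly."""
    p = approved / num_candidates
    if p in (0.0, 1.0):
        return 0.0  # approving none or all of the candidates gives zero weight
    return p * math.log2(1.0 / p) + (1.0 - p) * math.log2(1.0 / (1.0 - p))
```

Note that the weight peaks at P = 1/2 and is symmetric, so approving k or C-k of C candidates carries the same weight.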

Consider it and let me know what you think.

My reasoning is that this gives voters an incentive not to bullet vote unless they really only want to support a single candidate or a small number of candidates.

One tactic would be to risk supporting weakly supported candidates in order to inflate the weight of one's true top preferences. This is potentially foolish, though, since it also risks greatly inflating the influence of minority groups. In the same sense, it encourages voters to look for acceptable candidates from those minority groups to support, i.e. hedging the risk by choosing minority candidates that are somewhat more worth supporting.

This tactic can also be made less viable by imposing an approval cutoff (say, V/C approvals, where V is the number of voters and C is the number of candidates, or something of that kind), and then restricting the statistic to include only candidates that passed the threshold.
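That cutoff could be applied as a pre-pass over the ballots; here is a sketch of one possible reading (the ballot representation and names are my own):

```python
def surviving_candidates(ballots, candidates, num_voters):
    """Drop candidates with fewer than V/C approvals, where V is the
    number of voters and C the number of candidates.
    ballots: list of (voter_count, set_of_approved_candidates)."""
    cutoff = num_voters / len(candidates)
    totals = {c: 0 for c in candidates}
    for count, approved in ballots:
        for c in approved:
            totals[c] += count
    return {c for c in candidates if totals[c] >= cutoff}
```

The entropy statistic would then be computed over the surviving candidates only.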

----

> While I don't think it would be a good method in practice

The two most popular voting systems in practice are IRV and plurality; by that standard, anything is a good method in practice.

----

Instead of looking at the number of states for each voter/candidate being two (approved or not approved), if e.g. 1/3 of voters approve a candidate, we could see it as one state for approved and two for not approved. In such a case, I think the highest-entropy state would be for a voter to approve the mean number of candidates.

----

> Let each voter submit an approval ballot.
>
> Weight his approvals by
>
> P*log(1/P)+(1-P)*log(1/(1-P))
>
> where P is the fraction of candidates he approved.
>
> Consider it and let me know what you think.

What is the mathematics/motivation behind this particular formula? I don't think we've been given much to go on.

Edit - But anyway, it seems that basically your ballot gets more weight if you approve more candidates (up to half of them), though I'm not sure where this formula comes from other than that it's possibly something to do with entropy.

But obviously it's a bad idea. There's no reason to punish people who approve fewer candidates, and it encourages cloning.

----

For example, suppose the ballots are:

10: A | B C D E F G

15: A B | C D E F G

12: B C D | A E F G

7: C D F | A B E G

1: E F | A B C D G

Then the raw approval counts ranked in descending order are

B[27] A[25] C[19] D[19] F[8] E[1] G[0]

Or as fractions, these are approximately

B[0.6] A[0.556] C[0.4222] D[0.4222] F[0.1778] E[0.0222] G[0.000]

A reasonable tail-end elbow-detection method would identify E as the elbow point, and we could remove E and G from the election. Alternatively, an approval-rate threshold of 1/7 = 0.14285... (7 being the number of candidates) would accomplish the same.

This would leave candidates

B, A, C, D, F

Now we have ballots whose binary entropies H2 (with P now taken over the 5 remaining candidates) are

10: A | B C D F --> H2 = 0.721928...

15: A B | C D F --> H2 = 0.97095...

12: B C D | A F --> H2 = 0.97095...

7: C D F | A B --> H2 = 0.97095...

1: F | A B C D --> H2 = 0.721928...

From these, we compute the final scores as

A[21.7835...] B[26.215666...] C[18.448...] D[18.448...] F[7.51858...]

and B (who was actually the original approval winner) still wins, and by a wider margin.
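The numbers above can be reproduced with a short script (my own reconstruction of the described procedure, not canonical code for the method):

```python
import math

ballots = [  # (number of voters, set of approved candidates)
    (10, {"A"}),
    (15, {"A", "B"}),
    (12, {"B", "C", "D"}),
    (7,  {"C", "D", "F"}),
    (1,  {"E", "F"}),
]
candidates = set("ABCDEFG")
voters = sum(n for n, _ in ballots)

# Approval-rate threshold of 1/C removes E and G.
totals = {c: sum(n for n, a in ballots if c in a) for c in candidates}
survivors = {c for c in candidates if totals[c] / voters > 1 / len(candidates)}

def h2(p):
    # binary entropy, log base 2
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Each ballot is re-weighted by the binary entropy of its approval
# fraction among the surviving candidates.
scores = {c: 0.0 for c in survivors}
for n, approved in ballots:
    kept = approved & survivors
    w = h2(len(kept) / len(survivors))
    for c in kept:
        scores[c] += n * w
```

Note that F appears on both the 7-voter and the 1-voter ballots, so its total is 7*0.97095... + 1*0.72193... = 7.5186...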

I do think there are problems with this method from the standpoint of majoritarianism. It seems plausible that a minority could obtain disproportionate voting power over a bullet majority.

If the ballots were, for example,

51: A | B C D E F G

25: G F | A B C D E

12: E D C | A B F G

12: B | A C D E F G

then I think we have a problem.

----

I'm thinking more about similarities in the end result--we're trying to assign more weight to more informative ballots, which I think is a good idea in principle, but in practice we're left open to very easy manipulation by strategic nomination/voting for hopeless candidates.

----

And actually, in the context of approval voting, dividing by variance achieves the opposite effect of this system: it actually encourages bullet voting or sparsity of approval. This system symmetrically discourages sparsity of approval and sparsity of disapproval.

----

Most modifications to approval/score voting run up against similar problems: any modification to approval/score voting must give up either participation or independence of irrelevant alternatives, the two properties that make these systems so appealing.

(Which isn't to say they can't be useful at all; STAR accepts a very small violation of IIA in exchange for improving voter honesty.)

----

I wrote some more of the motivation above. But one motivation is that such voters provide the voting system with more information (as measured by the entropy statistic) than other voters do.

Also, the base of the logarithm doesn't matter, since it amounts to a uniform positive scaling.

----

This weighting favors those who approve half the candidates over those who approve just one or all but one. What grounds are there for not weighting them equally?
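For a concrete feel of the gap being asked about (numbers mine, assuming ten candidates):

```python
import math

def h2(p):
    # binary entropy, log base 2
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# weights for approving 1, 5, or 9 of 10 candidates
weights = {k: h2(k / 10) for k in (1, 5, 9)}
# h2(0.1) = h2(0.9) ~ 0.47 while h2(0.5) = 1.0, so the half-approver's
# ballot counts a bit more than twice as much as the other two.
```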
