Mathematical Paradigm of Electoral Consent

@cfrank You may be interested in this notion of "generalized Condorcet winners" via "Borda dominance." A paper on the topic is here https://www.jstor.org/stable/43662517 (let me know if you don't have access and I will get you the PDF).
Your proposal, and in particular "SP dominance" reminds me a bit of this idea.

@cfrank said in Mathematical Paradigm of Electoral Consent:
The pamphlet is not complete and I intend to provide citations where appropriate, but I’m not sure what citations would be needed.
Your historical discussion on pages 2–4, for one thing. The definition of STAR voting also probably deserves a citation. Your discussion of the relevance of these ideas to politics also might merit some citations.
Pages 6–7:
The following stipulation is adopted: That if one intends to utilize probability
measures to establish a decision algorithm for an OSS in a democracy, any
utilized probability measure imposed on the electorate should be uniform.

In what way would you impose a probability measure on the electorate? What would you do with it?
In your definition of the SP consent ceiling, did you mean R >= S instead of R > S?
I assume that R needs to be an element of S. Using R > S can lead to some consequences that I am not sure you intended. For example, in an Approval election in which a candidate gets 65% approval, (0, 0.65) is part of the SP consent ceiling, as is (0, r) for r in [0.65, 1].

Pages 7–8: This seems to be a lot of loose threads. I think you need to find a point and stick to what relates to it (although not necessarily what supports it; discussing contrary perspectives is fine). Things like the role of decision algorithms in machine learning should probably go in their own discussion at the end, which could cover alternate applications of these ideas.

@marylander In a similar vein to "loose threads," I think the connection to compression algorithms is supported only by the fact that the set of winners is smaller than the set of candidates. There might be a stronger philosophical argument to relate proportional representation committees to compression algorithms, but for single winner schemes I cannot see it.

@marylander that makes a lot of sense, I can definitely find good citations for all of those things. Thank you.
In terms of the probability measure, the electorate as I have defined it is a finite set of objects called voters, and any finite set can be equipped with a probability measure to turn it into a probability space. In terms of how it is used, that depends on the decision algorithm. It isn't easy to formalize the concept because decision procedures can get really wild, and I would have to restrict the scope to a specific kind of decision algorithm to say anything much more meaningful. I tried to connect it with Lewis Carroll's desiderata but it isn't formal. It might just be unnecessary.
For the SP consent ceilings, you are correct that my meaning has an anomaly; you are also absolutely correct about the intended meaning.
Thank you for your input, I have these concepts floating around in my head so trying to put them down on paper and running them by other people who are knowledgeable and have a fresh perspective is very helpful.

@brozai I think it depends on your perspective. Just as a rough example, one could create a formal model of voters and candidates as having "investment" distributed over "interests," i.e. a set of "interests" and letting each voter be essentially a probability distribution over those interests. Then one could take the sum total of those interest distributions and create an "electoral interest" distribution.
If each candidate is also a probability distribution over those interests, choosing a candidate can be seen as more or less projecting/compressing the electoral distribution into the set of distributions determined by the candidate pool. With this conception a voting system functions exactly as a compression algorithm.
Real life is more complicated than that, but I hope that illustrates my thinking better: a candidate's platform can be seen as a (high-quality or lousy) compression of electoral interests.
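Under those assumptions the "projection" step can be sketched in a few lines. Everything here is invented for illustration: the three interests, the particular voter and candidate numbers, and the choice of KL divergence as the measure of compression quality (the post does not fix a divergence):

```python
import numpy as np

# Toy model: voters and candidates are probability distributions over a
# shared set of 3 "interests" (all numbers made up for illustration).
voters = np.array([
    [0.7, 0.2, 0.1],   # voter 1's investment across the interests
    [0.1, 0.8, 0.1],   # voter 2
    [0.4, 0.3, 0.3],   # voter 3
])
candidates = np.array([
    [0.6, 0.3, 0.1],   # candidate A's platform
    [0.1, 0.1, 0.8],   # candidate B's platform
])

# "Electoral interest" distribution: the normalized sum (here, mean)
# of the voter distributions.
electoral = voters.mean(axis=0)

# Choosing a candidate "compresses" the electoral distribution onto the
# candidate pool; score compression loss by KL divergence (one plausible
# choice among many).
def kl(p, q, eps=1e-12):
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

best = min(range(len(candidates)), key=lambda i: kl(electoral, candidates[i]))
print(best)  # → 0: candidate A is the closer "compression" of this electorate
```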

@cfrank You seem very intent on reformulating all the language, definitions, and algorithms used for voting in terms of probability measures and random variables. Out of curiosity, and I hope this doesn't come off as confrontational, why is that?
It is not less rigorous to just use the conventional definitions used in social choice theory where ballots are weak orders and so on. Similarly, I do not see the use in considering generalizations of the voter / candidate sets to be of arbitrary (infinite) cardinality.
It could possibly be of interest to study the limiting behavior of voting rules as the number of candidates or voters grows (for example, studying the probability of a tie or of a Condorcet cycle in the limit of these quantities), but I don't think that's what's happening here.
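For what it's worth, that kind of limiting question is easy to play with numerically. Here is a toy Monte Carlo estimate (my own sketch, not from the thread) of the probability of a Condorcet cycle among 3 candidates under the impartial-culture assumption, where each voter draws a strict ranking uniformly at random:

```python
import itertools
import random

def cycle_probability(n_voters=101, trials=2000, seed=0):
    """Estimate P(Condorcet cycle) for 3 candidates, impartial culture."""
    rng = random.Random(seed)
    rankings = list(itertools.permutations(range(3)))
    cycles = 0
    for _ in range(trials):
        ballots = [rng.choice(rankings) for _ in range(n_voters)]
        def beats(a, b):
            # strict pairwise majority (n_voters is odd, so no pairwise ties)
            return sum(r.index(a) < r.index(b) for r in ballots) * 2 > n_voters
        # with 3 candidates and odd voters, "no Condorcet winner" = cycle
        if not any(all(beats(c, d) for d in range(3) if d != c)
                   for c in range(3)):
            cycles += 1
    return cycles / trials

# For large electorates this approaches Guilbaud's limit of roughly 8.8%.
```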

@brozai said in Mathematical Paradigm of Electoral Consent:
You seem very intent on reformulating all the language, definitions, and algorithms used for voting in terms of probability measures and random variables.
Hopefully this won't come off as a pile-on, but this was something I was confused about as well (regarding a previous paper): it seems way more complex than it needs to be. I compared it to saying that a "randomly selected point in a glass had a 75% chance of being occupied by liquid," as opposed to simply saying that "the glass is 75% full." It strikes me as a very roundabout way of expressing a simple concept.

@brozai I am not trying to reformulate all of the language, definitions, and algorithms for voting in terms of probability measures and random variables. I am proposing a specific paradigm that happens to include an intimate incorporation of probability theory, along with a few specific voting systems that fit nicely within that paradigm. Connecting with a larger mathematical framework allows use of the powerful tools that belong to that framework, and probability theory seems appropriate to me.
I agree that it is not less rigorous, it's equally rigorous. It's just the way I think and express myself, probably because my background is in pure math. If you see a more apt way to describe the concepts I am proposing I would definitely like to hear that. I want to find a higher level of abstraction that can maybe unify some of the things we're looking at in voting theory, if I could find a good theoretical foothold I would be using category theory, but I don't want to go too far off into abstract nonsense that nobody wants to look into.
I don't actually agree that it is very much more complex than it needs to be. As I mentioned in the introduction of the pamphlet, connecting voting theory with probability theory is nothing new. Condorcet was one of the pioneers of voting theory and his Jury Theorem is a direct application of probability theory to voting theory. As another example, Nash's equilibrium theorem is a direct application of probability theory and topology to game theory.
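Since the Jury Theorem comes up: its statement is easy to check numerically (a standard exercise, not anything from the pamphlet). With n independent voters, each correct with probability p > 1/2, the chance that a majority is correct grows with n and tends to 1:

```python
from math import comb

def majority_correct(n, p):
    """P(majority of n independent voters is correct), each correct w.p. p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With p = 0.6 the majority's accuracy climbs quickly with electorate size.
probs = [majority_correct(n, 0.6) for n in (1, 11, 101)]
```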
The benefit of the generalization is precisely its generality: it might be more amenable to application in other areas. I mentioned machine learning as one such area. And I want to point out that ordinal scores are different from weak orderings.

@cfrank I have a pure math background as well, so trust me I'm no stranger to painful detail & abstraction
I agree that there can be interesting connections to probability theory. The Jury Theorem is a great example.
In this case, I am not convinced it is necessary or instructive to introduce any additional tools or definitions beyond what is already commonplace in voting algorithms.
In the words of Dijkstra:
"The purpose of abstraction is not to be vague, but to create a new semantic level in which one can be absolutely precise."
And I believe we already have the tools to be absolutely precise with regular old ranked ballots over finitely many candidates. Sticking with conventional nomenclature and concepts will help people understand your proposal much more easily, and will also help contextualize it and compare to other methods.
In particular, I believe SP dominance is in fact equivalent to Borda dominance, and I believe your weighted PFPP scheme is in fact equivalent (edit: not equivalent since we have to allow skipped rankings, but very closely related) to a positional scoring rule, but these connections are very hard to see underneath all the new definitions and unnecessary framework.
It's possible I am misinterpreting something and that the equivalence I suggest above is invalid, but if this is the case I would find it very helpful to my understanding if you could provide an example where they differ !

@brozai I want to look into that Borda dominance scheme and see if it is different from my proposal. (EDIT: I totally do have access)
PFPP may be equivalent to a positional scoring rule at each election, but the prescription of the particular scoring rule and how it is allowed to change from one election to the next according to informative distributions is what makes PFPP different. For example, a thought I had earlier today was actually that if the distributions are allowed to update, then as fewer people overuse the higher score values, they become more potent when they are used. This can give voters an even stronger incentive against strategic bullet voting, since it will weaken their vote in the future when they may actually feel strongly about a candidate.
I'll read the PDF paper you linked and see if they coincide, if they do then I'll be happy because that means probably more analysis has been done on this system! Otherwise I'll try to illustrate points where I find that they differ. I think already the fact that the winner is called "generalized Condorcet" points to something different, since the methods I am proposing (at least on the surface, I could be wrong) have nothing to do with the Condorcet criterion.
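To make the incentive concrete, here is a deliberately crude sketch. This is NOT the PFPP rule (which isn't spelled out in this thread); it is only one way a score level could become "more potent" when fewer voters use it, by weighting each level inversely to its historical usage frequency:

```python
from collections import Counter

def level_weights(past_ballots, levels):
    """Toy rule: a level's weight is one minus its historical usage share.

    Rarely used levels get weights near 1; overused levels are devalued.
    Not the PFPP definition, just an illustration of the stated incentive.
    """
    counts = Counter(s for ballot in past_ballots for s in ballot)
    total = sum(counts.values()) or 1
    return {s: 1.0 - counts.get(s, 0) / total for s in levels}

# If everyone has been bullet-voting the top score (5), that score is
# devalued relative to the never-used middle scores.
weights = level_weights([[5, 0, 0], [5, 0, 0], [5, 0, 0]], levels=range(6))
```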

@cfrank said in Mathematical Paradigm of Electoral Consent:
PFPP may be equivalent to a positional scoring rule at each election, but the prescription of the particular scoring rule and how it is allowed to change from one election to the next according to informative distributions is what makes PFPP different
Ok, fair enough, but I can't comment on whether or not this is a good thing. It certainly would be a radical reform to current elections.
I sent the paper to the gmail attached to the google drive you shared, so let me know if you don't receive it. Unfortunately they don't do a ton of analysis besides introduce the concept of positional scoring dominance and then prove what types of dominance are actually constructible (what they call "Condorcet words"), but it is relevant I think nonetheless. In the language of this paper I think SP dominance corresponds to the k-Condorcet winner in the case where k = n.

@brozai on second thought, PFPP and its weighted variants definitely are not positional score systems even without accounting for the potentially changing distributions. It is only a positional score system if the distributions used for the random SP ceiling heights are uniform. I think the explanation of the system will become clearer with visuals.
In any case it could very well be that being a k-Condorcet winner when k = n is equivalent to being a unique candidate that is not SP dominated. I’m not sure! Still working through the paper.
I tried to give an explanation of the unweighted PFPP system a while back through a video. It may help, if you were interested, but I understand if it’s not your cup of tea! This is the video:
https://app.vmaker.com/record/SGSydGYcwOW9Vf6d
It’s like 20 minutes… 10 if you do x2, potentially less if you skip around.
On a related (maybe controversial?) note I take some issue with the Condorcet criterion. I also have noticed that ElectoWiki doesn’t seem to be very objective about it. While a Condorcet winner has the majority support of the electorate over any other candidate in a pairwise face-off, the majority groups that support the winner in different face-offs can differ from each other dramatically.
In other words, I would say that there is no guaranteed stable locus of electoral consent for a Condorcet winner—it is rather like a stitching together of victories in various unrelated and somewhat gamified competitions, and to me this makes the Condorcet paradox less of a paradox. In line with the concluding remarks of the paper I think it’s not at all obvious or necessarily correct that the Condorcet winner is the ideal choice even when one exists.

@brozai I also wanted to address your point about majority rule. If you are referring to May's Theorem, I think it's important to consider the scope of the proof. The theorem is proved assuming that voters can indicate one of only three options, 1, 0, or +1. The formal properties don't fully make sense beyond that scope.
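The scope cfrank describes is easy to make concrete: each voter reports -1, 0, or +1, and simple majority returns the sign of the ballot sum. The brute-force check below (my own sketch) confirms over all 3-voter profiles that majority satisfies anonymity, neutrality, and monotonicity (the heart of positive responsiveness):

```python
from itertools import product

def majority(profile):
    """Simple majority on {-1, 0, +1} ballots: the sign of the sum."""
    s = sum(profile)
    return (s > 0) - (s < 0)

profiles = list(product((-1, 0, 1), repeat=3))

# anonymity: only the multiset of ballots matters, not who cast them
anonymous = all(majority(p) == majority(tuple(sorted(p))) for p in profiles)
# neutrality: flipping every ballot flips the outcome
neutral = all(majority(tuple(-x for x in p)) == -majority(p) for p in profiles)
# monotonicity: raising one ballot never hurts the +1 outcome
monotone = all(
    majority(p[:i] + (p[i] + 1,) + p[i + 1:]) >= majority(p)
    for p in profiles for i in range(3) if p[i] < 1
)
```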

@cfrank well, this is true, but I think the monotonicity condition means that, with strategic agents, it has a natural extension to score ballots

@brozai I would need to see it formally. I can see what you mean about strategic agents, but I don’t see any extensions mentioned on Wikipedia beyond a similar statement (whatever that means) for approval voting (fully strategic score voting), and then some for other first-preference aggregators, which would have us using plurality voting. I personally doubt a useful extension to generic score voting exists, because of the nature of the preferential ambiguity between a strong majoritarian assertion and a weaker but broader consensus.

@cfrank A function that is monotonic on [0,1]^n → {0,1} must also be monotonic when restricted to {0,1}^n → {0,1}.
Strategic agents will only submit values in {0,1}, since by monotonicity any other value makes the chance of electing their favorite strictly lower.
Therefore the method must coincide with majority on {0,1}^n ballots, which is the entire domain of ballots from strategic agents.
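This can be checked concretely for one monotone rule in the 2-candidate case. The rule here (compare score totals) and the ballots in `others` are made-up examples; the point is that a voter who prefers A never gains by submitting anything other than the extreme ballot (A=1, B=0):

```python
# Hypothetical ballots already cast by the other voters: (score for A, score for B).
others = [(0.4, 0.9), (0.8, 0.2), (0.5, 0.5)]

def a_wins(my_a, my_b):
    """Mean-score rule between two candidates: A wins iff A's total exceeds B's."""
    total_a = my_a + sum(a for a, b in others)
    total_b = my_b + sum(b for a, b in others)
    return total_a > total_b

# The extreme ballot (1, 0) wins for A whenever any ballot on the grid does,
# so for an A-supporter it weakly dominates every intermediate score.
grid = [i / 10 for i in range(11)]
extreme_dominates = all(a_wins(1.0, 0.0) or not a_wins(a, b)
                        for a in grid for b in grid)
```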

@brozai this reasoning implies that a voter only considers the outcomes “my favorite wins” and “my favorite does not win.” A voter can also consider outcomes like “my favorite doesn’t win, but my second favorite does,” or “my favorite doesn’t win, but neither does my least favorite.”
Bullet voting is maybe a good strategy for an econ voter who is not at all risk averse and is all-or-nothing about their top choice, but real people are risk averse with stratified preferences, and will try to establish at least a Plan B in case Plan A doesn’t work out.
Increasing the probability that one’s favorite wins as much as possible locally does not necessarily increase the probability of a more global “acceptable outcome” as much as possible, which is what many real people try to accomplish, depending on their definition of what constitutes an acceptable outcome.
This does somewhat seem to lead to approval voting, which I don’t think is a bad system actually. I’d have to learn more about it. Obviously it has its own problems, but at least burial is quite minor.

@cfrank I am specifically referring to the case of 2 candidates! In this case, bullet voting will always be optimal

@brozai I see! Yes that makes more sense. I still am not sure, because there may be a minority with a strong preference against the weakly held majority preference, and this just isn't taken into account. Obviously that is the whole issue with majoritarianism. The more I consider it the more and more keen I am on multiwinner proportional representation. In that case I feel like something like quadratic voting might potentially do a very good job identifying candidates with interests that represent those of the electorate.

@cfrank If you are interested in proportional representation I would read the following paper which gives a good overview of PR schemes for approval ballots: https://arxiv.org/pdf/2007.01795.pdf
Quadratic voting is a somewhat poor quality method (imo) unfortunately.