Mathematical Paradigm of Electoral Consent
Hello. I've been working on a document describing a mathematical paradigm for measuring and analyzing consent in the context of certain voting systems. This is the current version; I wanted to post it here to see if anybody has criticism or commentary. It's food for thought, not yet complete, and needs work and experimental data. But here is the link:
Isn't the "frontier of the (S,P)-consent" just the CDF of the score distribution of a candidate? Why all the roundabout definitions---it seems a bit like reinventing the wheel.
@brozai thank you for reading and for your thoughtful question!
To answer: it’s actually more like 1 minus the CDF of the score distribution of the candidate, but only sort of. It’s not quite that simple, because the scores are ordinal in nature and are not necessarily real numbers. You can’t add two abstract ordinal scores together, and I definitely didn’t want to give the impression that you could. I should probably be more explicit about that, so I really appreciate your comment.
On the flip side, the SP frontier doesn’t really even have the properties of 1 minus a normal real-valued CDF, because a real-valued CDF allows computation of the “expected score” when the scores are real numbers. But the scores aren’t real numbers, they are just abstract ordinal scores, so there isn’t really even such a thing as the “area underneath” the SP frontier. That’s what creates the need for different measures, or for non-measure methods like the SP dominance ratios.
You could embed the SP frontier into the plane in a way that corresponds with any number of real-valued CDFs, but the SP frontier itself exists as a subset of the more abstract “SP space.” I hope that clarifies a bit!
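To make this concrete, here is a minimal sketch (the levels S0 through S4 and the ballots are hypothetical, not taken from the pamphlet) showing how such a frontier can be computed from the ordering of the levels alone, with no arithmetic ever performed on the scores themselves:

```python
from fractions import Fraction

# Ordinal score levels for one candidate, lowest to highest.
# Only the ordering matters; the labels carry no arithmetic meaning.
LEVELS = ["S0", "S1", "S2", "S3", "S4"]

def sp_frontier(ballots):
    """For each level S, the fraction P of voters scoring the candidate
    at S or higher -- a survival-function shape (like 1 - CDF), but
    built purely from comparisons of levels, never from adding scores."""
    rank = {s: i for i, s in enumerate(LEVELS)}
    n = len(ballots)
    return {
        s: Fraction(sum(1 for b in ballots if rank[b] >= rank[s]), n)
        for s in LEVELS
    }

ballots = ["S4", "S2", "S2", "S1", "S0"]
frontier = sp_frontier(ballots)
# Every voter meets S0; only one of five meets S4.
```

The point of using comparisons only is that the same computation works for any totally ordered label set, which is exactly why the frontier lives in the abstract SP space rather than in the real line.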
Secondly, I’m not sure how the definitions are roundabout per se—they are just mathematically precise. There are two reasons for this:
(1) I’m trying to be very formal so that it’s very clear what exactly the system is that I’m talking about. The fact that you reasoned out this connection to CDFs is encouraging to me in that regard. Score distribution CDFs were my original consideration, and a good amount of time and deeper thinking led me to this work-in-progress SP consent paradigm.
(2) Yes, one could jump straight to the 1 − CDF construct, but I’m demonstrating how that construct is built up from considerations of the model of consent being defined, and how it relates to the construct of production possibility frontiers in economics. If somebody pulls the 1 − CDF out of thin air, it just raises a bunch of questions, such as: why?
If nobody cares about whether the algorithm is philosophically and logically consistent with a specific paradigm, then you’re right, it’s just a bunch of words, and what matters is the system itself, which needs to be taken on its own scientific merit. That is somewhat the case regardless, but usually a good solution to a problem that gets accepted scientifically already has compelling rationale behind it well before experiments are done to test its practical utility.
There’s a concept in decision theory called “strategic optimization.” This is where, rather than looking to solve a problem in a results-oriented fashion, one tries to develop a strategy from first principles. The reason is that sometimes even a perfect strategy can lead to failure, and a poor strategy can lead to success, so results can actually confound strategy building. Confirmation bias is an example. So what I’m trying to do is construct a system from first principles, while trying to be very clear about where arbitrary decisions are being made and what motivates them when they occur.
@cfrank I can see you've thought quite a lot about the philosophy and intuition behind this proposal.
Have you identified any particular axiomatic failings or strategic vulnerabilities to other methods that this mitigates?
@cfrank It says I don't have access.
Would it be possible to just put its content into a post? Seems more in the spirit of the forum than linking externally, unless there is actually something in there that can't be in a message board post (such as an interactive app or the like).
@rob I think it’s much easier to read, and more organized and presentable, as a LaTeX PDF, and it’s easier to format equations. I intend to add visual diagrams as well, but for now it’s just text and math. You can request access if you like. Sorry, I thought I set the mode to “anybody who has the link,” but apparently I failed to do that; I’ll try to change it. Although as the author I do sort of like being able to see who has direct access to the file.
@cfrank Fair enough. But if you ever do decide to post the content on the forum, I will likely read it and offer my thoughts.
@brozai thanks, I’ve certainly tried to do a lot of thinking, but it’s been mostly done alone so I’m trying to check myself and get some criticism and outside perspective.
My intention isn’t necessarily to satisfy particular pre-existing criteria, but to provide an alternative unifying paradigm for analysis. For example, based on my own experiments, the weighted PFPP algorithm tends to align very often (but not always) with STAR. However, STAR is not designed to optimize statistical measures regarding SP frontiers. I think what constitutes a failure (in the social sense, not the axiomatic sense of simply not satisfying certain formal properties) depends on what paradigm you subscribe to. Perhaps contrary to my “first-principles” talk, I’m of the belief that each voting system should be taken on its own merits and evaluated based on results rather than criteria.
For example, weighted PFPP does not satisfy the majority criterion, but that’s actually sort of the point. I don’t believe the majority criterion is a good thing as compared with building a broader/more inclusive consensus of the electorate.
Any SP-efficient OSS should satisfy all of the typically enjoyed properties of any cardinal score system (except perhaps “Frohnmayer balance,” but as I have demonstrated elsewhere and intend to organize in this paper, that criterion is impotent).
Weighted PFPP with updating or relevant/informative probability distributions generally mitigates bullet voting and emphasizes the power of broad consensus over majoritarian strategy. Consistently utilized strategies will become noise as the system updates its distributions or as the distribution takes account of the frequency of certain types of candidate score profiles.
I wonder if you have any specific axioms in mind. The main goal is to emphasize broad consent over majoritarianism without eliminating the expressiveness of ballots.
@cfrank Well, for example, this method seems extremely vulnerable to burial. Also, I would be surprised if it satisfies any form of Participation, even the weaker ones.
There are definitely some interesting ideas here, and I like how you are trying to look at the entire distribution of scores rather than just the average. However, the fact that it does not coincide with majority rule on 2 candidates makes it already a non-starter for me.
@brozai I am curious about why participation would surprise you. Giving a candidate a higher ordinal score will only increase their chances of winning the election. For example, if I score a candidate as S4, then my indication raises the SP ceilings of the candidate at S1, S2, S3, and S4 because of the way SP consent is defined (S0 doesn’t really matter, it can’t be raised or lowered).
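As a rough illustration of that claim (the levels and ballots here are hypothetical, not from the pamphlet): adding one ballot that scores a candidate at S4 weakly raises the (S, P) ceiling at every level from S1 through S4, while S0 stays pinned at 1.

```python
# Hypothetical sketch: compare a candidate's SP ceilings before
# and after one additional voter scores them at the top level S4.
LEVELS = ["S0", "S1", "S2", "S3", "S4"]
RANK = {s: i for i, s in enumerate(LEVELS)}

def ceilings(ballots):
    """Fraction of voters scoring the candidate at each level or higher."""
    n = len(ballots)
    return {s: sum(1 for b in ballots if RANK[b] >= RANK[s]) / n
            for s in LEVELS}

before = ceilings(["S2", "S1", "S0"])
after = ceilings(["S2", "S1", "S0", "S4"])  # same ballots plus one S4

# Every level above S0 is strictly raised by the new high score.
raised = [s for s in LEVELS[1:] if after[s] > before[s]]
```

In this toy case the new ballot raises the ceiling at all four upper levels, which is the intuition behind why I would expect at least a weak form of participation to hold.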
Also, just know that I am trying to make the strongest case for this concept that I can without being dogmatic, so if I come off that way just let me know. My image for this system incorporates past data into distributions that are more or less stable. In fact, the distributions could be allowed to update only after each election, rendering each individual election deterministic but the whole sequence of electoral processes less predictable.
Burial does seem to be an issue, although some form of tactical voting is bound to make an appearance. Not to dismiss it—burial is serious. It is also a risky strategy though, and a STAR modification could help curb the incentive. Also I think effective burial requires information about front-runners, and if there are many different platforms or if the system is multi-winner it becomes less plausible for a rational voter. Still, voters are free to be irrational and/or risky. If you have any other considerations about that or about my rationale there I think it would be constructive for me to hear.
In terms of the majority criterion, I think we may just operate on different paradigms, and just to clarify, the example I gave in the pamphlet was between two candidates who may be in the context of a larger election. There are examples in real life where a majoritarian victory even in an election between only two candidates would be totally anti-social, like the pizza topping problem (3 people plan to pitch in equally for a single-topping pizza, but 2 people prefer a topping the 3rd is allergic to, for example).
There is a book called “Patterns of Democracy” by Arend Lijphart where the distinction between majoritarian democracy and consensual democracy is made clear, and it’s also made clear that consensual democracies are more highly correlated with superior social outcomes than majoritarian ones, and I think that makes sense.
The more I think about it, the more I feel like something like proportional representation makes sense. I just also think that it would be ideal to have the choices of representatives be as “consensual” as possible, but it isn’t easy to determine what exactly that is supposed to mean.
Marylander last edited by
So far I have read the first 5 pages. Here is what I think so far.
Given that the document has the appearance of a scientific paper, it is a bit weird that there are no citations.
Some of the history in the beginning of the paper seems broad and tangential to me. I could be wrong about this, though, so I ask other people who read this to check and see if they agree.
I think that some of the formalization on page 5 is incorrect. I don't think you can take the candidate set to be infinite (at least, not without some further conditions) because it might be impossible to find a winner, for example if every voter prefers C_i to C_j for i < j and the number of candidates is countably infinite, the Pareto criterion would forbid any candidate from being elected.
Extending the number of voters to infinite cases I think might also require some conditions as I suspect issues related to convergence and measurability might come up if it is done haphazardly.
@marylander that’s sensible about the presentation. The pamphlet is not complete and I intend to provide citations where appropriate, but I’m not sure what citations would be needed.
The broad overview is intended for people who are not necessarily familiar with voting theory, and the purpose is to establish the context of the document. I agree that some of it is tangential and I intend to make changes.
Also, the formalism is not incorrect; it would just require an appropriate decision algorithm to select a winner from a continuum, or it may not allow certain criteria to be satisfied in certain cases, as you indicated. For example, if the candidate set is a collection of points in a plane, and the voters assign each candidate a score from a continuum according to distances from certain ideal points, then the decision algorithm might select a candidate that minimizes some chosen objective function of the scores.
But that’s not super relevant anyway, since only finite sets are considered.
@cfrank You may be interested in this notion of "generalized Condorcet winners" via "Borda dominance." A paper on the topic is here https://www.jstor.org/stable/43662517 (let me know if you don't have access and I will get PDF)
Your proposal, and in particular "SP dominance" reminds me a bit of this idea.
Marylander last edited by Marylander
@cfrank said in Mathematical Paradigm of Electoral Consent:
The pamphlet is not complete and I intend to provide citations where appropriate, but I’m not sure what citations would be needed.
Your historical discussion on pages 2-4, for one thing. The definition of STAR voting also probably deserves a citation. Your discussion of the relevance of these ideas to politics also might merit some citations.
The following stipulation is adopted: That if one intends to utilize probability measures to establish a decision algorithm for an OSS in a democracy, any utilized probability measure imposed on the electorate should be uniform.
In what way would you impose a probability measure on the electorate? What would you do with it?
In your definition of SP-consent ceiling, did you mean R >= S instead of R > S?
I assume that R needs to be an element of S. Using R > S can lead to some consequences that I am not sure if you intended. For example, in an Approval election in which a candidate gets 65% approval, (0, 0.65) is part of the SP-consent ceiling, as is (0, r) for r in [0.65, 1].
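A small sketch of the discrepancy, using the Approval-style numbers above (two score levels, 65 of 100 voters approving); the two candidate definitions of the ceiling disagree at the bottom level S = 0:

```python
# Two-level (Approval-style) election: score 1 = approve, 0 = disapprove.
scores = [1] * 65 + [0] * 35
n = len(scores)

# Ceiling under the strict definition (R > S) vs. the weak one (R >= S).
strict = {S: sum(r > S for r in scores) / n for S in (0, 1)}
weak = {S: sum(r >= S for r in scores) / n for S in (0, 1)}

# Under R > S the ceiling at S = 0 is only 0.65 (the approval rate),
# whereas under R >= S it is 1, since every ballot trivially meets S = 0.
```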
Pages 7-8: These seem like a lot of loose threads. I think you need to find a point and stick to what relates to it (although not necessarily what supports it; discussing contrary perspectives is fine). Things like the role of decision algorithms in machine learning probably should go in their own discussion at the end that could cover alternate applications of these ideas.
@marylander In a similar vein to "loose threads," I think the connection to compression algorithms is supported only by the fact that the set of winners is smaller than the set of candidates. There might be a stronger philosophical argument to relate proportional representation committees to compression algorithms, but for single winner schemes I cannot see it.
@marylander that makes a lot of sense, I can definitely find good citations for all of those things. Thank you.
In terms of the probability measure, the electorate as I have defined it is a finite set of objects called voters, and any finite set can be equipped with a probability measure to turn it into a probability space. In terms of how it is used, that depends on the decision algorithm. It isn't easy to formalize the concept because decision procedures can get really wild, and I would have to restrict the scope to a specific kind of decision algorithm to say anything much more meaningful. I tried to connect it with Lewis Carroll's desiderata but it isn't formal. It might just be unnecessary.
For the SP consent ceilings, you are correct that my wording has an anomaly, and you are also absolutely correct about the intended meaning.
Thank you for your input, I have these concepts floating around in my head so trying to put them down on paper and running them by other people who are knowledgeable and have a fresh perspective is very helpful.
@brozai I think it depends on your perspective. Just as a rough example, one could create a formal model of voters and candidates as having "investment" distributed over "interests," i.e. a set of "interests" and letting each voter be essentially a probability distribution over those interests. Then one could take the sum total of those interest distributions and create an "electoral interest" distribution.
If each candidate is also a probability distribution over those interests, choosing a candidate can be seen as more or less projecting/compressing the electoral distribution into the set of distributions determined by the candidate pool. With this conception a voting system functions exactly as a compression algorithm.
Real life is more complicated than that but I hope that illustrates my thinking better---a candidate's platform can be seen as a (high quality or lousy) compression of electoral interests.
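Here is a toy sketch of that picture (the interests, voters, and candidates are all made up, and KL divergence is just one plausible choice of “compression loss,” not something fixed by the paradigm):

```python
import math

# Toy model: three "interests"; each voter and candidate is a
# probability distribution over them.
voters = [
    [0.6, 0.3, 0.1],
    [0.2, 0.5, 0.3],
    [0.1, 0.2, 0.7],
]
candidates = {
    "A": [0.5, 0.4, 0.1],
    "B": [0.3, 0.3, 0.4],
}

# "Electoral interest" distribution: the average of the voter distributions.
n = len(voters)
electorate = [sum(v[i] for v in voters) / n for i in range(3)]

def kl(p, q):
    """KL divergence D(p || q): information lost when q stands in for p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Electing a candidate = choosing the best available "compression"
# of the electoral distribution from the candidate pool.
winner = min(candidates, key=lambda c: kl(electorate, candidates[c]))
```

Under this toy objective, the candidate whose platform distribution sits closest to the aggregate electoral distribution wins, which is the sense in which I mean a platform is a compression of electoral interests.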
A Former User last edited by A Former User
@cfrank You seem very intent on reformulating all the language, definitions, and algorithms used for voting in terms of probability measures and random variables. Out of curiosity, and I hope this doesn't come off as confrontational, why is that?
It is not less rigorous to just use the conventional definitions used in social choice theory where ballots are weak orders and so on. Similarly, I do not see the use in considering generalizations of the voter / candidate sets to be of arbitrary (infinite) cardinality.
It could possibly be of interest to study limiting behavior of voting rules as the number of candidates or voters grow---for example, studying the probability of a tie or of a Condorcet cycle in the limit of these quantities---but I don't think that's what's happening here.
@brozai said in Mathematical Paradigm of Electoral Consent:
You seem very intent on reformulating all the language, definitions, and algorithms used for voting in terms of probability measures and random variables.
Hopefully this won't come off as a pile-on, but this was something I was confused about as well (regarding a previous paper): it seems way more complex than it needs to be. I compared it to saying that a "randomly selected point in a glass had a 75% chance of being occupied by liquid," as opposed to simply saying that "the glass is 75% full." It strikes me as a very roundabout way of expressing a simple concept.
@brozai I am not trying to reformulate all of the language, definitions, and algorithms for voting in terms of probability measures and random variables. I am proposing a specific paradigm that happens to include an intimate incorporation of probability theory, along with a few specific voting systems that fit nicely within that paradigm. Connecting with a larger mathematical framework allows use of the powerful tools that belong to that framework, and probability theory seems appropriate to me.
I agree that it is not less rigorous; it's equally rigorous. It's just the way I think and express myself, probably because my background is in pure math. If you see a more apt way to describe the concepts I am proposing, I would definitely like to hear it. I want to find a higher level of abstraction that can maybe unify some of the things we're looking at in voting theory. If I could find a good theoretical foothold I would be using category theory, but I don't want to go too far off into abstract nonsense that nobody wants to look into.
I don't actually agree that it is very much more complex than it needs to be. As I mentioned in the introduction of the pamphlet, connecting voting theory with probability theory is nothing new. Condorcet was one of the pioneers of voting theory and his Jury Theorem is a direct application of probability theory to voting theory. As another example, Nash's equilibrium theorem is a direct application of probability theory and topology to game theory.
The point of the generalization is just that it is more general, and so might be more amenable to application in other areas. I mentioned machine learning as one such area. And I want to point out that ordinal scores are different from weak orderings.