Recursive IRV

(this was previously referred to as "Hare squared", at least with regard to its first level of recursion depth)
This method does something that I think is entirely new: it lets you control the quality of its output with a single input parameter. The higher the number, the better the quality. If you supply the value 0, it simply gives you the plurality winner. A value of 1 gives you IRV. 2 and above are where the new stuff kicks in. The higher the value, the more processing is necessary (exponentially so), so after 4 or 5 it may get pretty impractical, depending on the number of voters and especially the number of candidates.
Background: (simple explanation of IRV, for comparison)
IRV improves upon plurality (i.e., a count of first-choice votes) with a form of iterative refinement. It still uses plurality, but in a "process of elimination" loop, giving better results: results that, among other things, reduce vote splitting and the incentive for voters to vote strategically.
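For concreteness, plain IRV can be sketched in a few lines of JavaScript. This is my own illustrative sketch, not the CodePen code; it assumes ballots are arrays of candidate names, best first, with ties broken in favor of earlier-listed candidates:

```javascript
// Illustrative IRV sketch (not the CodePen code). Assumed ballot format:
// arrays of candidate names, best first, e.g. ['A', 'B', 'C'].
function irvWinner(candidates, ballots) {
  let remaining = [...candidates];
  while (remaining.length > 1) {
    // Count first choices among the candidates still in the running.
    const counts = Object.fromEntries(remaining.map(c => [c, 0]));
    for (const ballot of ballots) {
      const top = ballot.find(c => c in counts);
      if (top !== undefined) counts[top] += 1;
    }
    // Eliminate the plurality loser (ties broken by list order here;
    // a real election rule would need an explicit tie-break).
    const loser = remaining.reduce((a, c) => counts[c] < counts[a] ? c : a);
    remaining = remaining.filter(c => c !== loser);
  }
  return remaining[0];
}
```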
The new method:
Recursive IRV is a way of continuing this process of refinement by doing an inner process of elimination, in order to determine which candidate to eliminate in the outer process of elimination. So instead of using plurality to determine the "bad candidates" to eliminate, it uses plurality to determine the "good candidates" to eliminate from the determination of which are "bad candidates," and therefore to be eliminated.
To the degree that IRV improves upon plurality, this method can improve upon IRV.
and by extension....
There is no limit to the depth that we can descend, other than computer power. The more we descend, the more we dilute the impact of it being based on plurality. This in turn reduces the voter's incentive to strategically vote (such as by exaggerating differences between perceived frontrunners).
Each time we descend another level, it alternates between using plurality and "antiplurality" in the innermost loop, the latter finding bad candidates rather than good candidates.
The elimination process, when eliminating good candidates, will look a bit different from the process when eliminating bad candidates. Most of the changes happen at the beginning of the elimination process rather than toward the end, since the popular candidates will be eliminated first, sending large numbers of votes to other candidates. This is perfectly fine, but just different.
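To make the alternation concrete, here is a hedged sketch of how I read the recursion (my own code, not the CodePen implementation; the ballot format, arrays of names best first, and the first-listed tie-break are my assumptions):

```javascript
// Hedged sketch of the recursive idea; not the actual implementation.
// Tally first choices among the remaining candidates.
function firstChoiceCounts(candidates, ballots) {
  const counts = Object.fromEntries(candidates.map(c => [c, 0]));
  for (const ballot of ballots) {
    const top = ballot.find(c => c in counts);
    if (top !== undefined) counts[top] += 1;
  }
  return counts;
}

// Pick the single "best" (findWorst = false) or "worst" (findWorst = true)
// candidate at a given recursion depth.
function selectCandidate(candidates, ballots, depth, findWorst) {
  if (candidates.length === 1) return candidates[0];
  if (depth === 0) {
    // Base case: plurality (or antiplurality when findWorst is true).
    // Ties go to the earlier-listed candidate -- an assumption here.
    const counts = firstChoiceCounts(candidates, ballots);
    return candidates.reduce((a, b) =>
      (findWorst ? counts[b] < counts[a] : counts[b] > counts[a]) ? b : a);
  }
  // Process of elimination: repeatedly remove the opposite-quality
  // candidate, chosen one level down, until one remains.
  let remaining = [...candidates];
  while (remaining.length > 1) {
    const out = selectCandidate(remaining, ballots, depth - 1, !findWorst);
    remaining = remaining.filter(c => c !== out);
  }
  return remaining[0];
}

// depth 0 = plurality, depth 1 = IRV, depth 2 and up = the new territory.
const recursiveIRV = (candidates, ballots, depth) =>
  selectCandidate(candidates, ballots, depth, false);
```

With this reading, depth 0 reduces to plurality and depth 1 to IRV, and the base case flips between plurality and antiplurality with each level of descent, matching the alternation described above.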
A byproduct of this process is that the method will be Condorcet compliant, which regular IRV is not. But it goes beyond that, continuing to improve the quality of the results even when no candidate is a Condorcet winner. It remains to be seen whether this improvement continues indefinitely, but it's a good bet that the results keep getting better at each recursion depth.
Arrow's theorem
This may be a way to effectively get around Arrow's theorem, which proves we can never completely get rid of this strategic incentive. Remember, there is nothing in the theorem that proves we can't continue to reduce that incentive, closer and closer to nonexistent. Other voting methods, such as the Schulze method (Schwartz sequential dropping), attempt to do so with increasingly complex logic that is hard to explain, hard to put into programming code, and hard to put into legal code. This one keeps the explanation and code relatively simple, and determines the degree of refinement simply by setting a parameter: the recursion depth.
That's not to say it is yet proven that it eliminates all strategic considerations, even at the hypothetical infinite recursion. (I'm going to need some other minds on that project. @AndyDienes? )
As a way of bringing together competing camps of the voting reform community
Here is possibly the real value of the method. Not so much in actually being a practical method for public elections, but as an academic exercise in service of an important persuasive argument.
For instance, there is a certain value to this as a way to convince IRV fans that IRV can be made Condorcet compliant, without doing anything beyond applying "more IRV." Those who are in love with IRV might be convinced that this (and Condorcet methods in general) is simply "more of a good thing."
Meanwhile, Condorcet fans might see this and change their perception of IRV: rather than thinking of it as an irreparably flawed mechanism, they can think of it as a good mechanism that just isn't applied strongly enough.
The CodePen:
This is a work in progress. https://codepen.io/karmatics/pen/PoRvYaX
It needs better output. But you can see (if you look carefully) that it gets the "right results" for Burlington 2009.
This one has more readable output, but it is hard-coded to a depth of 2. (The first one will eventually let you just set a parameter for recursion depth; it needs a bit more work.) https://codepen.io/karmatics/pen/bGvZKqP

I'm thinking there must be a way to specify the recursion such that it will bottom out.

@jackwaugh Yes if by "bottom out" you mean it won't change the ordering anymore, that should happen with just a few levels deep.

@rob said in Recursive IRV:
Recursive IRV is a way of continuing this process of refinement by doing an inner process of elimination, in order to determine which candidate to eliminate in the outer process of elimination. So instead of using plurality to determine the "bad candidates" to eliminate, it uses plurality to determine the "good candidates" to eliminate from the determination of which are "bad candidates," and therefore to be eliminated.
To the degree that IRV improves upon plurality, this method can improve upon IRV.

I apologize, but I do not think I really understand what your method does. In particular, the part "it uses plurality to determine the 'good candidates' to eliminate from the determination of which are 'bad candidates,' and therefore to be eliminated" seems quite vague. Is there an easy full description of the method?

@spelunker Maybe if I can convince myself to get off my butt and achieve things, I will get around to coding this in JS. Currently, I am rewriting part of the simulation framework; then I will return to debugging bottom-two runoff.
@rob means he wants to apply the Hare method recursively. A given specification of the recursive method might put a fixed limit on the depth of recursion. The bottom will just use Hare. If not at the bottom, conduct an IRV election among the remaining candidates to decide which one to eliminate from the list for the subsequent round of tallying.

@jackwaugh said in Recursive IRV:
@rob means he wants to apply the Hare method recursively. A given incarnation of the method might put a fixed limit on the depth of recursion. The bottom will just use Hare. If not at the bottom, conduct an IRV election among the remaining candidates to decide which one to eliminate from the list for the subsequent round of tallying.
I am still not sure if I fully understand. "If not at the bottom, conduct an IRV election among the remaining candidates to decide which one to eliminate from the list for the subsequent round of tallying." This doesn't really make sense to me. Wouldn't this just eliminate the candidate with the lowest plurality score?

@spelunker No, that is not the same thing. You might as well say that you could skip the top application of Hare and just elect the candidate with the highest plurality score and that would be the same as Hare.

The recursive method alternates between looking for a good candidate and looking for the worst candidate.
I have not checked @rob's assertion that just one level of recursion will make the system comply with Condorcet.

Now I am fully confused. Would you be able to give a complete (and self-contained) description of the rule? (Pseudocode would also work.)

@rob , this is kind of the key section of your code for this, right?
```javascript
doEliminationLoop = (candidates, ballotsExp, isNegative) => {
  var results = { rounds: [] };
  var numCandidates = candidates.length;
  for (var i = 0; i < numCandidates; i++) {
    var candidatesSorted = // (depth > 0) ?
      // sortedCandidatesByEliminationRounds(candidates, ballotsExp, true, 1) :
      sortedCandidatesByFirstPlaceVotes(candidates, ballotsExp);
    console.log(depth + " " + isNegative);
    console.log(JSON.stringify(candidatesSorted, 0, 2));
    var toBeEliminated = candidatesSorted[(isNegative) ? 0 : candidatesSorted.length - 1];
    results.rounds.push(candidatesSorted);
    // setValue('output_2', JSON.stringify(candidatesSorted, 0, 2))
    // maybe use Array.filter?
    candidates = []; // rebuild array without winner
    for (var j = 0; j < candidatesSorted.length; j++) {
      if (candidatesSorted[j].name != toBeEliminated.name) {
        candidates.push(candidatesSorted[j].name);
      }
    }
  }
  results.winner = results.rounds[results.rounds.length - 1][0];
  return results;
}
```
His condition on `isNegative` is key here. He's sorting the candidates the same way, based on top rankings, but he's picking whom to eliminate from either the top or the bottom of the list.
@spelunker I could be vague on some details and we might need @rob to clarify, unless one of us wants to fully reverse engineer his code and see why it produced different results for the Burlington, Vt. election than what you would predict, @spelunker.

This still very much seems like IRV, and not Condorcet compliant, though. Just imagine you have two voters and three candidates with preferences a>b>c and c>b>a. Wouldn't this method delete b in the first step, even though b is the Condorcet winner?

Thanks for giving an example to work.
Hare to the zeroth power is choose-one plurality.
Hare to the first power is Hare.
Hare squared does about (n - 1)(n - 1)/2 rounds, concerned with promotion of candidates rather than elimination of them.
I have three candidates and want to get down to two. Whom do I promote first? It's a tie between a and c. I have to appeal to @rob for tiebreaking rules, or make up my own.
Can you give an example that doesn't tie at any stage?

@spelunker Hey sorry if my explanation wasn't clear. I actually started to develop a much better version, where you could choose any depth, and (the hard part) it had the ability to show output so it was understandable, but then I messed up and lost my code or something and dropped it for a while. (I'm sure I'll come back to it)
Look at it this way. What IRV attempts to do, prior to doing the final calculation (i.e., "determine who is preferred by more people"), is to remove irrelevant alternatives so the calculation is not affected by them. But it does the elimination rather crudely, based on "negative plurality" (fewest first-choice votes).
This just takes it to another level, using the same logic for the elimination as it does for the final determination. As you go to further and further recursion depths, this should get more and more accurate, as the "crude part" is less and less relevant.
My general hypothesis is that Condorcet compliance is simply an easy-to-define step on the way to our goal of "immune to vote splitting." IRV logic is obviously closer to this goal than plurality, but since it can miss the Condorcet winner, it isn't very far toward the goal. Applying the logic twice appears to make it Condorcet compliant. Applying it 3 or more times moves it ever closer to the goal.
Since it is technically impossible to apply it an infinite number of times, this doesn't violate Arrow's theorem (and the like).

@rob This is sort of like engineered chess programs. They look ahead a certain number of moves and then resort to crude heuristics.
The ones that are not "engineered" use connectionism and/or Darwinism, but techniques like that are not relevant here. Before people tried those, they were engineering the solution, i. e. designing it top to bottom by hand.

@spelunker said in Recursive IRV:
Just imagine you have two voters and three candidates with preferences a>b>c and c>b>a. Wouldn't this method delete b in the first step, even though b is the Condorcet winner?
It might, but I don't consider that a good example, since to me it is (or should be) a three-way tie.
If you have a case where there are no pairwise ties, and there is a Condorcet winner which it misses, I'd be interested in seeing it.

I believe it's a Copeland tie and a Dasgupta-Maskin tie.
a vs. b: no winner.
a vs. c: no winner.
c vs. b: no winner.


I wonder whether it can be proven that adding more and more layers of recursion eventually converges on a fixed outcome, for any given set of ballots.

@jackwaugh said in Recursive IRV:
I wonder whether it can be proven that adding more and more layers of recursion eventually converges on a fixed outcome, for any given set of ballots.
I'd love to see it proven. It seems to me that it is obviously true, but it is beyond my capabilities to actually prove it. Currently I am relying on intuition as well as just testing it on sample data.
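That kind of sample-data testing can be sketched generically. Here, `winnerAt(depth)` is a hypothetical wrapper (not in the CodePen) that runs the method at a given depth on a fixed ballot set; the probe scans depths until the winner stops changing for a couple of consecutive levels:

```javascript
// Generic convergence probe (illustrative only; winnerAt is assumed to
// wrap the recursive method for one fixed set of ballots).
function convergenceDepth(winnerAt, maxDepth = 8, patience = 2) {
  let prev = winnerAt(0);
  let stable = 0;
  for (let d = 1; d <= maxDepth; d++) {
    const cur = winnerAt(d);
    stable = (cur === prev) ? stable + 1 : 0;
    // Once the winner has held steady for `patience` levels, report the
    // depth at which it first settled.
    if (stable >= patience) return d - patience;
    prev = cur;
  }
  return null; // no empirical convergence within maxDepth
}
```

This only demonstrates empirical stabilization for one ballot set, of course; it is evidence, not a proof.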

@rob said in Recursive IRV:
@jackwaugh said in Recursive IRV:
I wonder whether it can be proven that adding more and more layers of recursion eventually converges on a fixed outcome, for any given set of ballots.
I'd love to see it proven. It seems to me that it is obviously true, but it is beyond my capabilities to actually prove it. Currently I am relying on intuition as well as just testing it on sample data.
I could try to prove it, but I might need a more formal definition of the method for it. If possible, not in JavaScript.