As part of Michael Shepherd’s story on ranked-choice voting we surveyed our readers to learn how they would have ranked the candidates for governor in both the 2010 and 2014 elections. Then we analyzed the responses to make our best guess at how these races would have looked if they were run under the ranked-choice process proposed in the forthcoming Question 5.
The first issue we had to address was selection bias. The people who graciously responded to our call for a new perspective on an old race were not a representative sample of all Mainers. The respondents tended to be active online, politically engaged and interested in ranked-choice voting in particular. The sample, of course, tended to exclude those who are not active online, who are politically disaffected or who favor other media outlets. This is self-selection bias.
If we could transport back in time and re-run these races, we would expect the first round results in the ranked voting process to nearly match the original results. For example, in the actual 2014 election Republican Paul LePage won with 48 percent of the vote, Democrat Mike Michaud took 43 percent, and independent Eliot Cutler garnered 8 percent. However, our results gave LePage 7 percent, Michaud 64 percent, and Cutler 27 percent.
Obviously that’s a poor match to the expectations and a clear indication of bias in the sample in the first rank position. However, all is not lost to this bias. We relied on the responding LePage supporters to represent the interests of all LePage supporters. Likewise, Cutler and Michaud supporters represented their respective interests.
To show how the ranked-choice process would work, we simulated the 2010 and 2014 gubernatorial races starting with the actual results. The remaining ranks were guided by “paired-preference” distributions, which helped us work around the selection bias in our sample.
A paired-preference is a distribution that gives the probability that candidate B will be ranked after candidate A. Say, LePage is in the first position on a ballot. What is the probability that Cutler will be next? On a different ballot, if Michaud is second, what is the probability that LePage will be next?
We analyzed the results as pairs of preferences because we needed to see how each candidate was likely to stack up against each of the other candidates.
Even though there is self-selection bias in the first rank responses that favors Democrats and independents, there is no indication that bias is a factor in the paired-preferences. This is because we are using self-selected samples that, by design, cover the population’s sample space. Even though the top rankings are dominated by responses favoring Democrats, the Republicans who did respond most likely chose second, third, etc. ranks in the same way that Republican voters would have at the original election time.
It doesn’t matter that we received more responses from Democrats and independents in this survey than Republicans. The Democrats and the Republicans, defined by their first-round selection, remain segregated.
Keep in mind, the Democratic respondents can’t change the Republican respondents’ general preference for lower rank choices unless they had selected a Republican first, and that would make them a Republican for this race.
If there were bias in the paired-preference distributions, how would it manifest? With enough Republican responses, we would see preferences that are well out of line with our general expectations. For example, in the 2010 race on the Republican ballots, we would not expect to see Democrat Libby Mitchell strongly favored over the other candidates. What we see, however, is no strong preference for any one candidate in the lower rank positions at all. This is good news.
We built the distributions using a process called bootstrapping. Bootstrapping lets us effectively re-run the survey many times over by drawing new samples from the original responses.
For the 2014 race, since there are only six paired-preference plots, they are all shown here on the same graph.
The process of bootstrapping is not especially complicated, but it is a very powerful tool for statisticians. It helps us build distributions by re-sampling data many times over. Here’s how it works.
In the original data we were able to find the probability of each permutation of two candidates, say LePage followed by Moody. This is calculated by counting the number of L’s followed by M’s in the ballots, and dividing that count by the total number of ballots. Let’s represent that as p(LM), the probability of “L” followed by “M”. We can find the probabilities for all possible pairs from the survey data by this same process of counting and dividing by the total number of ballots.
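As an illustration, that counting step might look like the following Python sketch. The ballots, candidate initials and helper function here are all made up for demonstration; the real survey data differed.

```python
from itertools import permutations

# Hypothetical survey ballots: each lists candidate initials in ranked order.
ballots = [
    ["L", "M", "C"],
    ["M", "C", "L"],
    ["L", "M"],        # respondents could leave lower ranks blank
    ["C", "M", "L"],
]

def paired_preference(ballots, a, b):
    """Fraction of ballots that rank candidate `a` immediately before `b`."""
    count = sum(
        1 for ballot in ballots
        if any(x == a and y == b for x, y in zip(ballot, ballot[1:]))
    )
    return count / len(ballots)

# All ordered pairs of candidates give the full set of probabilities.
candidates = ["L", "M", "C"]
probs = {(a, b): paired_preference(ballots, a, b)
         for a, b in permutations(candidates, 2)}

print(probs[("L", "M")])  # p(LM): 2 of the 4 toy ballots -> 0.5
```

With three candidates this yields six ordered pairs; the same counting applies to any field of candidates.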
In the 2010 election, there are 20 ordered pairs of the five candidates, thus giving us 20 probabilities. We want a distribution over each of these probabilities so that we can tell whether there is a statistically significant difference between them, and so that we can draw values from each distribution to include random variation in the simulated race.
The first time we calculated the p(LM) probability was one possible outcome. Resampling the survey data with replacement gives us the first bootstrapped sample. We record that new, slightly different p(LM), and run the process all over again. After resampling and recalculating 1,000 times, we can closely approximate the population’s distribution over p(LM). Of course, this is all done in a computer program and not by hand.
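The resample-and-recalculate loop can be sketched in a few lines of Python. The ballots here are made up, and for brevity p(LM) is computed only from the top two ranks; the idea, not the data, is the point.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical ballots; each ranks candidate initials in order of preference.
ballots = [["L", "M"], ["M", "L"], ["L", "M"], ["M", "L"], ["L", "M"]]

def p_lm(sample):
    """Share of ballots ranking L immediately followed by M."""
    return sum(b[:2] == ["L", "M"] for b in sample) / len(sample)

# Bootstrap: resample the ballots with replacement, recompute p(LM) each time.
boot = []
for _ in range(1000):
    resample = [random.choice(ballots) for _ in ballots]
    boot.append(p_lm(resample))

# The 1,000 recomputed values approximate the sampling distribution of p(LM).
mean = sum(boot) / len(boot)
```

The 1,000 values in `boot` are what get piled into a distribution, as the Connect Four analogy below describes.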
You can visualize how this works with a good ol’ game of Connect Four. Each checker is a calculated paired-preference, p(AB). The slots in the Connect Four board correspond to probability values. Each time we find p(AB), we drop a checker in the slot closest to the calculated value. The checkers won’t all land in the same slot, but they will stack up in the most common one. If our board is big enough, and has enough slots, the stacks can nicely represent a smooth curve.
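The Connect Four board is really just a histogram: each slot is a bin, each checker a bootstrapped value. A toy sketch, using made-up p(AB) values and an assumed slot width of 0.05:

```python
from collections import Counter

# Hypothetical bootstrapped p(AB) values -- the "checkers."
values = [0.48, 0.51, 0.50, 0.52, 0.49, 0.50, 0.47, 0.50]

def slot(p, width=0.05):
    """Snap a probability to the nearest slot on the board."""
    return round(p / width) * width

# Drop each checker into its slot and stack them up.
board = Counter(slot(v) for v in values)
for edge in sorted(board):
    print(f"{edge:.2f}: {'o' * board[edge]}")
```

With thousands of bootstrapped values and narrower slots, the stacks trace out a smooth curve.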
Because we have clear self-selection bias in the top rankings among our respondents’ ballots, we couldn’t accurately simulate a re-run of either the 2010 or 2014 races with these ballots alone. Instead we used the original split of votes from each year’s results, and filled in the latter rank positions based on the preferences above. With the top of the rankings coming from the original races, distributions calculated, and the process in place to establish rankings, we can get started with the race simulations.
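To make the elimination-and-transfer mechanics concrete, here is a minimal sketch of an instant-runoff count on a tiny hypothetical electorate. Our actual simulations filled in the lower ranks by drawing from the bootstrapped paired-preference distributions rather than using fixed ballots like these.

```python
from collections import Counter

def instant_runoff(ballots):
    """Eliminate the last-place candidate each round, transferring those
    ballots to their next surviving choice, until someone holds a majority."""
    remaining = {c for b in ballots for c in b}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate.
        tallies = Counter(
            next(c for c in b if c in remaining)
            for b in ballots
            if any(c in remaining for c in b)
        )
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):
            return leader
        remaining.discard(min(tallies, key=tallies.get))

# Hypothetical miniature electorate with three candidates.
ballots = (
    [["A", "C"]] * 4 +   # A first, C second
    [["B", "C"]] * 3 +
    [["C", "B"]] * 2
)
print(instant_runoff(ballots))  # C is eliminated; its ballots flow to B -> "B"
```

Note that A leads the first round with 4 of 9 votes but loses once C’s ballots transfer, which is exactly the dynamic ranked-choice voting is designed to capture.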
In the simulated 2010 race, the winner is Eliot Cutler (independent) with 303,424 votes.

In the simulated 2014 race, the winner is Paul LePage (Republican) with 322,846 votes.