Answers to All the Supreme Court’s Questions About How to Fix Partisan Gerrymandering

Justice Samuel Alito Jr. stands for an official photo with other members of the Supreme Court on June 1 in Washington.

Saul Loeb/AFP/Getty Images

As one of the attorneys for the plaintiffs, I was able to attend Tuesday’s oral argument in Gill v. Whitford. At the argument, the justices probed, among other things, how the plaintiffs’ test for partisan gerrymandering would work, how reliable the social science is that underpins this test, and what the test’s implications would be for judicial involvement. Since the plaintiffs’ theory relies in part on my academic work, I’m in a good position to address these issues.

With respect to the test’s operation, Justice Gorsuch warned that a gerrymandering standard should not be like a “steak rub.” That is, it should not be imprecise and opaque in its makeup: “I like some turmeric, I like a few other little ingredients, but I’m not going to tell you how much of each.”


In reality, the plaintiffs’ proposed test for adjudicating gerrymandering claims is more akin to a detailed recipe than a mystery stew. The test has four elements, and litigants would be required to go through them one by one, proceeding to the next phase only if they satisfied the previous criterion. These four elements are:

  1. Was the district plan enacted with the discriminatory intent of benefiting one party and handicapping another one? Maps drawn by a single party in full control of the state government often (but not always) have this motive.
  2. Has the plan exhibited (or is the plan forecast to exhibit) a historically large partisan asymmetry? A partisan asymmetry means a map does not treat the parties equally in terms of how their votes translate into seats. A map’s asymmetry can easily be calculated and then compared to historical data to determine if it’s unusually big.
  3. Is the plan’s partisan asymmetry durable? To find out, a range of plausible election results should be considered. A map’s asymmetry should be deemed persistent enough only if it would endure across this range of outcomes.
  4. Is the plan’s partisan asymmetry unjustified? At this final step, the gold standard is to use a computer algorithm to simulate many maps that satisfy the state’s legitimate redistricting criteria. The challenged plan’s asymmetry is unjustified only if it exceeds that of most of the simulated maps.
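The sequential, recipe-like character of the test can be sketched in code. This is an illustrative sketch only: the field names and thresholds below are hypothetical stand-ins, not the litigants' actual numbers, and each step stands for an inquiry a court would conduct with real evidence.

```python
def is_presumptive_gerrymander(plan):
    """Apply the four screening steps in order; a plan is flagged only
    if it fails every one. All fields and cutoffs are hypothetical."""
    # 1. Discriminatory intent: was the map drawn to help one party?
    if not plan["partisan_intent"]:
        return False
    # 2. Historically large asymmetry: the plan's asymmetry compared
    #    to the historical distribution (percentile cutoff illustrative).
    if plan["asymmetry_percentile"] < 95:
        return False
    # 3. Durability: the asymmetry must persist across a plausible
    #    range of electoral swings (here, stay positive under each).
    if not all(a > 0 for a in plan["asymmetry_under_swings"]):
        return False
    # 4. Justification: the asymmetry must exceed that of most maps
    #    simulated under the state's legitimate redistricting criteria.
    if plan["share_of_simulations_exceeded"] <= 0.5:
        return False
    return True
```

Note that the steps are conjunctive: a bipartisan map, or one whose skew vanishes under slightly different conditions, exits the pipeline early and is upheld.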

Under this approach, there would be some easy cases, like the Wisconsin State Assembly plan at issue in Whitford. This plan’s authors admitted its pro-Republican intent. Its partisan asymmetry is worse than that of any map nationwide between 1972 and 2010. Its asymmetry would persist even if there were a massive Democratic wave. And its asymmetry is larger than that of any simulated assembly map. Conversely, it’s clear a plan would be upheld if it were designed through a bipartisan or nonpartisan process, if its asymmetry were historically small, if its asymmetry would disappear under slightly different electoral conditions, or if it were no more asymmetric than most simulated maps.

There would also be hard cases under the plaintiffs’ test. What if a plan is only somewhat more asymmetric than most maps historically? What if the various ways we can measure asymmetry don’t agree about a plan’s skew? What if a plan’s asymmetry would decline, but not disappear, under different electoral conditions? But these scenarios don’t turn the test into Gorsuch’s steak rub. They just mean that the test, like all legal standards, requires judicial discretion in certain factual circumstances. This is how all of constitutional law works.


Next, take the social science the plaintiffs employed, in particular their preferred asymmetry metric: the efficiency gap. (For more on the efficiency gap, see this law review article I co-authored.) Justice Alito questioned whether the efficiency gap should be seen as “the Rosetta stone” of partisan gerrymandering. He asked how it handles “elections that are not contested,” why the benchmark for tallying “surplus votes” should be “50 percent of the votes, instead of the votes obtained by the runner-up,” and what to make of the finding by the efficiency gap’s creator, Eric McGhee, that “the effects of party control on bias … decay rapidly.”

This is not the place to get all the way into the wonky weeds. (For that, see McGhee’s amicus brief.) But there are good nontechnical responses to all of Alito’s concerns. As to uncontested races, all asymmetry metrics (not just the efficiency gap) have to estimate their outcomes in some fashion; otherwise one gets the false impression that voter opinion is unanimous in a district with just one candidate. Fortunately, most estimation methods lead to about the same conclusions for uncontested races. As a result, there is an extremely high correlation, 0.98, between the calculations of the plaintiffs’ expert (using one estimation technique) and McGhee’s earlier scores (using a different one).

As to the definition of surplus votes, common sense shows why McGhee’s approach is correct. Say a candidate won a district 60 percent to 40 percent. Did this candidate receive 10 or 20 percentage points more than she needed for victory? The answer is plainly 10. If the candidate had gotten 20 fewer percentage points, most of those would have gone to her opponent, and she would have lost the election.
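McGhee's surplus-vote idea, and the efficiency gap built on it, can be sketched in a few lines. This is a simplified two-party version under assumed round numbers; real calculations must also handle uncontested seats and turnout differences, as discussed above.

```python
def wasted_votes(dem, rep):
    """Wasted votes in one two-party district: the loser wastes every
    vote; the winner wastes only the surplus beyond the bare majority
    needed to win. Returns (Dem wasted, Rep wasted)."""
    needed = (dem + rep) / 2  # votes beyond this threshold are surplus
    if dem > rep:
        return dem - needed, rep
    return dem, rep - needed

def efficiency_gap(districts):
    """Net wasted-vote advantage as a share of all votes cast, over a
    list of (dem_votes, rep_votes) districts. Positive values here mean
    more Democratic votes were wasted (a pro-Republican skew)."""
    dem_wasted = rep_wasted = total = 0
    for dem, rep in districts:
        dw, rw = wasted_votes(dem, rep)
        dem_wasted += dw
        rep_wasted += rw
        total += dem + rep
    return (dem_wasted - rep_wasted) / total
```

On the article's 60–40 example, `wasted_votes(60, 40)` counts 10 surplus points for the winner, not 20: the surplus is measured against the majority threshold, not against the runner-up's total.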

And as to McGhee’s finding, it indeed shows that not all efficiency gaps endure for the rest of the decade. This is precisely why the last step of the plaintiffs’ test is necessary: to make sure that a plan’s asymmetry is durable rather than transient. McGhee’s paper also covered only plans from the 1970s, 1980s, and 1990s. Since then, there has been a dramatic rise in the persistence of plans’ efficiency gaps, thanks to more sophisticated technology and voters’ greater partisanship. Wisconsin’s State Assembly plan is a case in point; it has favored Republicans in three straight elections—and will continue to do so under just about any electoral scenario.
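The durability inquiry can be illustrated with a uniform-swing sketch. The numbers below are invented for illustration (they are not Wisconsin's actual results): a party's supporters are packed into a few lopsided districts, and its seat count barely moves even under a large statewide wave in its favor.

```python
def seats_won(dem_shares, swing=0.0):
    """Count seats won after shifting every district's Democratic
    vote share by `swing` points (the uniform-swing simplification)."""
    return sum(1 for share in dem_shares if share + swing > 50)

# Hypothetical plan: Democratic vote share by district. Democrats are
# packed into two overwhelming seats and spread thin everywhere else.
plan = [80, 78, 44, 43, 42, 41, 40]

# Even a 6-point pro-Democratic wave leaves the seat count unchanged:
# the asymmetry is durable across a wide range of outcomes.
results = {swing: seats_won(plan, swing) for swing in (-3, 0, 3, 6)}
```

In this made-up map Democrats hold a slight majority of the statewide vote yet win two of seven seats at every swing tested, which is the kind of persistence the test's third step is designed to detect.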


Lastly, consider Whitford’s potential implications for the judiciary. Chief Justice Roberts worried that if the plaintiffs prevail, then “there will naturally be a lot of these claims raised around the country.” Even more troublesome, according to the chief justice, is the prospect that “the intelligent man on the street” might think the high court’s rulings arose “because the Supreme Court preferred the Democrats over the Republicans,” or vice versa. This perception could “cause very serious harm to the status and integrity of the decisions of this court in the eyes of the country.”

Predictions are difficult, especially about the future. But based on the historical record, the number of viable partisan gerrymandering claims would likely be small under the plaintiffs’ test. The plaintiffs’ expert studied more than 200 statehouse plans from 1972 to 2014. Of these, only about 40 were designed by a single party and then exhibited a large partisan asymmetry. And of these, several were not durable in their effects or could be justified on geographic or other grounds. An upper estimate of the test’s potential impact is thus around a half-dozen invalidations per decade.

Roberts’ worry echoes Justice Felix Frankfurter’s famous warning that courts should stay out of the “political thicket” of districting. But Frankfurter was wrong then, and there is no reason to think his premonition is right now. By ending the malapportionment that plagued mid-20th-century America, the Supreme Court terminated a harmful practice that no other actor could realistically combat. The result was glory for the high court and a marked improvement in American democracy. The same sequence is quite possible here. Partisan gerrymandering is at least as subversive as unequal district population. It too cannot plausibly be stopped by any nonjudicial body. And if it were stopped, the benefits for American democracy would be profound. “The precious right to vote,” in Justice Ginsburg’s words on Tuesday, would finally be vindicated.
