Reporting Results from the 2018 DF/RC LEO Survey: The “Bin” Question

We are nearing a final release of the 2018 Democracy Fund / Reed College Local Election Official survey. Our current discussion is all about the “bins”: what is the best way to categorize local election officials, and by implication local election jurisdictions, so that we provide meaningful categories for comparison without lumping together very disparate places?

There’s no magic formula for making this choice, as David Kimball and Brady Baybeck showed so effectively in their 2013 Election Law Journal article, “Are All Jurisdictions Equal? Size Disparity in Election Administration.” Using 2008 data, Kimball and Baybeck showed that the population of local election officials is dominated by small jurisdictions, yet the majority of registered voters live in larger jurisdictions (see Figures 1 and 2, reproduced from their article).

Kimball and Baybeck chose to report their results by “small”, “medium”, and “large” as shown in the figures, because they argued these categories reflected fundamental distinctions in the nature of election administration:

To simplify some of the analyses that follow, we divide the universe of local jurisdictions into three size categories: small (serving less than 1,000 voters), medium (serving between 1,000 and 50,000 voters), and large jurisdictions (serving more than 50,000 voters). We chose 1,000 voters as one dividing line because jurisdictions with fewer than 1,000 voters are generally small towns that have no more than a couple of polling places and a handful of poll workers. We expect these jurisdictions to have a different election administration experience than larger jurisdictions. In addition, roughly one-third of the jurisdictions served less than 1,000 voters in recent presidential elections, so this serves as a natural break in the data.

We chose 50,000 voters as the other dividing line because jurisdictions serving more than 50,000 voters tend to be in densely populated metropolitan areas with a large central city. Thus, the largest jurisdictions have different infrastructure and transportation networks than the medium-sized jurisdictions, which are mostly rural and exurban counties. Together, these dimensions characterize what we define as small, medium, and large jurisdictions in a variety of analyses below. The smallest jurisdictions are primarily in the upper Midwest and New England, with a smaller number in the Plains. Large jurisdictions are concentrated in the major metropolitan centers of the United States.

Our 2018 distributions look quite similar to what David and Brady found. Below, we’ve reproduced histogram displays of jurisdictions, first counties and then townships, by populations of registered voters. Most notable is how many local election officials serve in townships in the United States (predominantly in Michigan, Wisconsin, and New England), yet how comparatively few voters there are in those jurisdictions. Note that, to keep the display readable, we have excluded jurisdictions that serve 100,000 or more registered voters; those jurisdictions administer elections for 35% of all registered voters.
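
For readers who want to build a similar display from their own data, here is a minimal ggplot2 sketch (not our actual figure code); the data frame `jurisdictions` and its columns `type` and `reg_voters` are hypothetical names used only for illustration.

```r
library(ggplot2)

# Hypothetical data frame `jurisdictions`: one row per jurisdiction, with a
# `type` column ("county" or "township") and a `reg_voters` column giving the
# number of registered voters served. These names are illustrative only.
# jurisdictions <- read.csv("jurisdictions_2018.csv")

# Drop the 100,000+ jurisdictions so the display stays readable.
small_jur <- subset(jurisdictions, reg_voters < 100000)

ggplot(small_jur, aes(x = reg_voters)) +
  geom_histogram(bins = 50) +
  facet_wrap(~ type, ncol = 1, scales = "free_y") +
  labs(x = "Registered voters in jurisdiction",
       y = "Number of jurisdictions")
```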

 

While the final report is not yet complete (coming attractions!), we have tentatively decided to split the difference, in what we hope is an instructive rather than Solomonic division.

We will report our results in the following bins (a sketch of the binning logic appears just after the list):

  1. 0 – 5,000 registered voters. These jurisdictions comprise 25% of our LEO respondents and serve 2.9% of registered voters.
  2. 5,001 – 25,000 registered voters. These jurisdictions comprise 30% of our respondents and serve 12.7% of registered voters.
  3. 25,001 – 100,000 registered voters. These jurisdictions comprise 30% of our respondents and serve 18.5% of registered voters.
  4. 100,001 registered voters and above. These jurisdictions comprise 15% of our respondents and serve 66.9% of registered voters.
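
A rough sketch of how this binning can be implemented in R follows; again, the `jurisdictions` data frame and `reg_voters` column are hypothetical names, and this is not our actual reporting code.

```r
# Assign each jurisdiction to one of the four reporting bins.
jurisdictions$size_bin <- cut(
  jurisdictions$reg_voters,
  breaks = c(0, 5000, 25000, 100000, Inf),
  labels = c("0-5,000", "5,001-25,000", "25,001-100,000", "100,001+"),
  include.lowest = TRUE
)

# Share of jurisdictions (respondents) in each bin ...
prop.table(table(jurisdictions$size_bin))

# ... and share of registered voters served by each bin.
tapply(jurisdictions$reg_voters, jurisdictions$size_bin, sum) /
  sum(jurisdictions$reg_voters)
```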

The discerning reader will notice that the last category covers a lot of voters. This is unavoidable: the category includes relatively few LEOs, and our survey guarantees confidentiality to our respondents. We can report some results with the final bin broken into two smaller bins, but we must honor the commitment we made to our respondents.

This is the reality of American election administration. It’s a classic case where diversity and decentralization are a source of strength but can also create inequities in funding and voter access.

How Geospatial Mapping Can Improve Election Audits

Today’s electionline story describes 25 houses in Hamden, CT that have been incorrectly assigned to election districts since the last redistricting cycle in 2011, and whose residents have been receiving the wrong ballots. There are charges that voters have been “disenfranchised,” though it’s unclear whether the ballots were counted for the “wrong” races or whether only some races were counted.

The process obviously needs to be investigated, and Secretary of State Denise Merrill is calling not just for a detailed investigation of Hamden’s procedures but also for a statewide audit, after it became clear that there were additional districting errors, including candidates who were elected in districts where they did not reside.

There are a lot of moving parts here, and a quick scan of the various stories shows an early tendency on the part of journalists to reach for a partisan lens. It appears instead to be an accident of unfortunate events: the Democratic registrar in Hamden has been on medical leave during the election, leaving a single person in charge who happened to be a Republican, and the district was won by a Republican in a relatively close race (though one where the winning margin of 77 votes exceeded the number of voters who were misassigned).

While we will all need to wait to see the outcome of the audit, election scientists have been aware of this mis-mapping problem for a while because of a series of presentations that Dr. Michael McDonald and Dr. Brian Amos have given at our recent conferences. McDonald and Amos show that mis-assignments are seldom intentional; they most often result from out-of-date shapefiles (the geospatial files used to assign locations to larger geographic entities, such as election precincts and districts) and from misalignments between “street files” (the lists that jurisdictions use to match street addresses to precincts and districts) and the actual geographic boundaries of the district.
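
To make the mechanics concrete, here is a minimal sketch, using the R sf package, of how a street-file assignment could be checked against district boundaries with a point-in-polygon test. This is not McDonald and Amos’s code; the file names and columns (`district_id`, `assigned_district`, `lon`, `lat`) are hypothetical.

```r
library(sf)

# Hypothetical inputs: a district boundary shapefile and a table of geocoded
# voter addresses carrying the district assigned by the jurisdiction's street file.
districts <- st_read("districts.shp")            # polygons with a district_id column
voters    <- read.csv("geocoded_voters.csv")     # voter_id, lon, lat, assigned_district

# Turn the addresses into spatial points and match the districts' projection.
voter_pts <- st_as_sf(voters, coords = c("lon", "lat"), crs = 4326)
voter_pts <- st_transform(voter_pts, st_crs(districts))

# Point-in-polygon join: which district does each address actually fall inside?
joined <- st_join(voter_pts, districts["district_id"], join = st_within)

# Flag addresses whose street-file assignment disagrees with the geography.
mismatches <- subset(joined, !is.na(district_id) & district_id != assigned_district)
nrow(mismatches)
```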

There are illustrative examples of these mis-assignments in the paper that McDonald and Amos presented at the MIT Election Data and Science Lab “Election Audits Summit” in December.

I urge interested readers to follow this link to learn more about the great work being done by McDonald and Amos, and how their technology can help to improve election accuracy.

Voter “fraud” report for Presidential commission rife with errors and inaccuracies

(Crossposted to electionupdates.caltech.edu)

I look forward to a more detailed analysis by voter registration and database matching experts of the GAI report that will be presented to the Presidential Advisory Commission on Election Integrity, but even a cursory reading reveals a number of serious misunderstandings and confusions that call into question the authors’ understanding of some of the most basic facts about voter registration, voting, and elections administration in the United States.

Fair warning: I grade student papers as part of my job, and one of the comments I make most often is “be precise.” Categories and definitions are fundamentally important, especially in a highly politicized environment like the one currently surrounding American elections.

The GAI report is far from precise; it’s not a stretch to say at many points that it’s sloppy and misinformed. I worry that it’s purposefully misleading. Perhaps I overstate the importance of some of the mistakes below. I leave that for the reader to judge.

  • The report uses an overly broad and inaccurate definition of vote fraud.

American voter lists are designed to tolerate invalid voter registration records, which do not equate to invalid votes, because to do otherwise would lead to eligible voters being prevented from casting legal votes.

But the report follows a very common and misleading attempt to conflate errors in the voter rolls with “voter fraud”. Read their “definition”:

Voter fraud is defined as illegal interference with the process of an election. It can take many forms, including voter impersonation, vote buying, noncitizen voting, dead voters, felon voting, fraudulent addresses, registration fraud, elections officials fraud, and duplicate voting.8

Where did this definition come from? As the source of the definition, they cite the Brennan Center report “The Truth About Voter Fraud” (https://www.brennancenter.org/sites/default/files/legacy/The%20Truth%20About%20Voter%20Fraud.pdf).

However, the Brennan Center authors are very careful to define voter fraud in a way that directly warns against an overly broad and imprecise definition. From pg. 4 of their report:

“Voter fraud” is fraud by voters. More precisely, “voter fraud” occurs when individuals cast ballots despite knowing that they are ineligible to vote, in an attempt to defraud the election system.1

This sounds straightforward. And yet, voter fraud is often conflated, intentionally or unintentionally, with other forms of election misconduct or irregularities.

To be fair to the authors, they do not, in their analysis, conflate situations such as being registered in two places at once with “voter fraud,” but the definition is sloppy, is not supported by the report they cite, and reinforces a highly misleading claim that voter registration errors are analogous to voter fraud.

David Becker can describe ad nauseam how damaging this misinterpretation has been.

  • The report makes unsubstantiated claims about the efficacy of Voter ID in preventing voter fraud.

Regardless of how you feel about voter ID, if you are going to claim that voter ID prevents in-person vote fraud, you need to provide actual proof, not just a supposition. The report authors write:

GAI also found several irregularities that increase the potential for voter fraud, such as improper voter registration addresses, erroneous voter roll birthdates, and the lack of definitive identification required to vote.

The key term here is “definitive identification,” a term that appears nowhere in HAVA. The authors either purposely or sloppily misstate the legal requirements of HAVA. On pg. 20 of the report, they write that HAVA has a

“requirement that eligible voters use definitive forms of identification when registering to vote”

The word “definitive” appears again, and a bit later in the paragraph we learn that a “definitive” ID, according to the authors, means:

“Valid drivers’ license numbers and the last four digits of an individual’s social security number…”,

But not according to HAVA. HAVA requirements are, as stated in the report:

“Alternative forms of identification include state ID cards, passports, military IDs, employee IDs, student IDs, bank statements, utility bills, and pay stubs.”

The rhetorical turn occurs at the end of the paragraph, when the authors conclude that these other forms of ID are:

“less reliable than the driver’s license and social security number standard”

and apparently not “definitive” and hence prone to fraud. This portion of the report is far from precise.

Surely the authors don’t intend to imply that a passport is “less reliable” than a driver’s license and social security number. In many (most?) states, a “state ID card” is just as reliable as a driver’s license. I’m not familiar with the identification requirements for a military ID (perhaps an expert can help out? [ED NOTE: I am informed by a friend that a civilian ID at the Pentagon requires a retinal scan and fingerprints]), but are military IDs really less “definitive” than a driver’s license?

If you are going to claim that voter fraud is an issue requiring immediate national attention, and that states are not requiring “definitive” IDs, you’d better get some of the most basic details of the most basic laws and procedures correct.

  • The authors claim states did not comply with their data requests, when it appears that state officials were simply following state law.

The authors write:

(t)he Help America Vote Act of 2002 mandates that every state maintains a centralized statewide database of voter registrations.14

That’s fine, but the authors seem to think this means that HAVA requires that the states make this information available to researchers at little to no cost. Anyone who has worked in this field knows that many states have laws that restrict this information to registered political entities. Most states restrict the number of data items that can be released in the interests of confidentiality.

Rather than acknowledging that state officials are constrained by state law, the authors claim non-compliance:

In effect, Massachusetts and other states withhold this data from the public.

I can just hear the gnashing of teeth in the 50 state capitols. I am sympathetic to the authors’ difficulties in obtaining statewide voter registration and voter history files. Along with the authors, I would like to see all state files made available to researchers for a low or modest fee.

There is no requirement that the database be made available for an affordable fee, nor that it be available beyond political entities. These choices are left to the states. It is wrong to charge “non-compliance” when an official is following a statute passed by their state legislature.

I don’t know whether the report authors lacked subject matter knowledge or were purposefully trying to create a misleading impression of non-cooperation with the Commission.

  • The report shows that voter fraud is nearly non-existent, while simultaneously claiming the problem requires “immediate attention.”

But let’s return to the bottom line conclusion of the report: voter fraud is pervasive enough to require “immediate attention.” Do their data support this claim?

The most basic calculation would be the rate of “voter fraud” as defined in the report. The 45,000 figure (total potential illegally cast ballots) is highly problematic: it is imputed from suspect calculations in 21 states and then extrapolated to the other 29 states without regard for even the most basic rules of statistical inference.

Nonetheless, even if you accept the calculation, it translates into a “voter fraud” rate of roughly 0.00032 (45,000 / 139 million), or about three hundredths of one percent.
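
The arithmetic is easy to check; the figures below are simply the ones quoted above.

```r
potential_fraud <- 45000    # the report's "total potential illegally cast ballots"
ballots_cast    <- 139e6    # the roughly 139 million ballots used as the denominator

rate <- potential_fraud / ballots_cast
rate         # ~0.00032
rate * 100   # ~0.032 percent, i.e. about three hundredths of one percent
```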

This is almost exactly the probability that you will be struck by lightning at some point in your lifetime (a chance of about 1 in 3,000: http://news.nationalgeographic.com/news/2004/06/0623_040623_lightningfacts.html).

I’m not the first one to notice this comparison; see pg. 4 of the Brennan Center report cited above. And here I thought I had found something new!


There are many, many experts in election sciences and election administration who could have helped the Commission conduct a careful scientific review of the probability of duplicate registration and duplicate voting. This report, written by Lorraine Minnite more than a decade ago, lays out precisely the steps that need to be taken to uncover voter fraud and how statewide voter files should be used in this effort. There are many others in the field, both those worried about voter fraud and those skeptical of it, who have been calling for just such a careful study.

Unfortunately, the Commission instead chose to consult a “consulting firm” with no experience in the field, which in turn consulted database companies that also had no expertise in the field.

I’m sure that other experts will examine in more detail the calculations about duplicate voting. However, at first look, the report fails the smell test. It’s a real stinker.


Paul Gronke
Professor, Reed College
Director, Early Voting Information Center

http://earlyvoting.net

Early Voting: An Advantage for Republicans?


The research team at the Elections Research Center at the University of Wisconsin, Madison, has a new paper analyzing the partisan impact of early voting laws in combination with a set of other election reforms. The piece is gated at Political Research Quarterly but may be available from the authors.

I would still caution against overinterpreting these results as providing a roadmap for election law gamesmanship. Burden et al. spend a bit too much time, in my judgment, opining about how partisan actors may or may not misestimate the political impact of reforms to election laws, without acknowledging the highly contingent and dynamic nature of the legal and administrative environment. 

For example, it’s almost certain that when a new voting method is made available, strategic political actors from both parties look at the change, look at which groups opt for one or another method, and start to adjust their campaigns accordingly. Capturing this kind of institutional dynamic is nearly impossible in a national study like this, and leaving it out can easily make gamesmanship seem a lot simpler than it actually is.

How is Oregon Motor Voter affecting different counties?

(This is a guest posting from Nick Solomon, Reed College senior in Mathematics)

One of our first assignments in our Election Sciences course was to take a look at the Oregon Motor Voter data and try to tease out any patterns we could find in it.

I’ve always been interested in geographic statistics, so I decided to examine Oregon counties. This can be especially valuable because geography tends to be a good proxy for demographic variables we might not have access to, like income, race, or education level (none of these are accessible via the Oregon statewide voter registration file).

The figure displays party of registration among citizens registered via OMV.  It’s important to remember when looking at the graphic that the OMV process initially categorizes all citizens as “NAV” (non-affiliated voters), and citizens must return a postcard designating a party. As of January 2017, as shown on the left, 78% of registrants did not return the card, and only 11% decided to select a party. 

Bar plot of percent of voters registered via OMV, by county
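
A bar chart like this takes only a few lines of ggplot2. The sketch below is not my exact code; the `counties` data frame and its columns (`county`, `omv_registrants`, `total_registrants`) are hypothetical names.

```r
library(ggplot2)

# Hypothetical county-level summary: one row per Oregon county.
# counties <- read.csv("omv_by_county.csv")
counties$pct_omv <- 100 * counties$omv_registrants / counties$total_registrants

ggplot(counties, aes(x = reorder(county, pct_omv), y = pct_omv)) +
  geom_col() +
  coord_flip() +   # horizontal bars keep the county names readable
  labs(x = NULL, y = "Percent of registered voters added via OMV")
```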

 

The county-by-county totals are fascinating. OMV voters constitute the highest percentage of registered voters in Malheur County. Many readers may recognize the name: the Malheur National Wildlife Refuge was the site of a 41-day standoff between law enforcement and a small group of occupiers.

Malheur is located in the farthest southeast corner of the state. It’s rural, relatively poor, and much more Republican than the rest of the state. John McCain received 69% of the vote in Malheur in 2008. 

In an upcoming blog post, another student will be posting a map of this county-by-county visualization, and it’s apparent that a number of rural counties have high percentages of OMV registrants.

At the recommendation of a few experts who looked at the graphic, I decided to plot the percentage of OMV voters in each county against the county’s total number of registered voters. This lets us get a sense of whether Malheur is an outlier, with a very small number of voters making the percentage overly sensitive, or whether this is a number we can trust.

Scatter plot of total registered voters (log scale) and percent of voters registered via OMV, by county

Here, the total number of voters is plotted on a log scale, as many counties have smaller numbers of voters, while the Portland metro area has many more.

The log scale allows us to get a better sense of any relationship between number of voters and percent registered by OMV without the few large numbers dominating the plot.
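
A sketch of this scatter plot, reusing the hypothetical `counties` data frame from the bar-chart example above:

```r
# (Assumes ggplot2 is loaded and `counties` is built as in the previous sketch.)
ggplot(counties, aes(x = total_registrants, y = pct_omv)) +
  geom_point() +
  scale_x_log10() +   # log scale keeps the Portland-area counties from dominating
  labs(x = "Total registered voters (log scale)",
       y = "Percent of registered voters added via OMV")
```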

This graphic shows that there are quite a few counties of similar size to Malheur, and some that are even smaller. Furthermore, we see that Malheur is not very far from other counties of its size.

Finally, to my eye, there seems to be no meaningful relationship between these two variables, so I find myself concluding that Malheur County, along with Umatilla, Morrow, Curry, and Coos, is experiencing a much greater gain in access to voter registration than some larger, more urban counties.

For those interested, these graphics were made with R and ggplot2. I’ll be posting on my personal blog with more details about how I made them.

Eventually, I hope to learn more. I am also curious about how OMV might be affecting party turnout at the polls. Stay tuned for future updates!

Early Voting Under Consideration in the Northeast
Image courtesy of the National Conference of State Legislatures (http://www.ncsl.org/research/elections-and-campaigns/absentee-and-early-voting.aspx)

A number of Northeast states are considering adding or expanding early voting, according to a story in The Hill. 

I hope that administrators and legislators in these states base their decisions on comprehensive and accurate information rather than on anecdote.

Most importantly, early voting has a complicated relationship to overall voter turnout.  Most studies show a small but positive relationship, though one prominent study reports a negative relationship. If you put in more early voting locations, more citizens vote early (but it’s not clear if more voters overall cast a ballot). 

Jan Leighley and Jonathan Nagler put it best in a recent blog posting (in the context of voter registration laws): higher turnout depends mostly on parties and candidates, not on changes to voting laws. 

The point? New Hampshire Secretary of State Bill Gardner is quoted in the story, and his statement reflects several common misconceptions about early voting:

“We’re seeing turnout nationally go down in each of the last three elections even as more and more states rush to make it easier to vote by having early voting,”

Misconception 1: there has been no “rush” to add early voting options since 2008. The rate of states adding early voting provisions has slowed substantially as we get down to the final 13 holdouts (according to the National Conference of State Legislatures, 37 states plus DC offered some form of early voting in 2016, compared to 36 plus DC in 2012, and 34 in 2008).

Misconception 2: turnout has not declined for the last three cycles. Final totals in 2016 appear to be slightly up from 2012 and about 2% lower than 2008.  

Misconception 3: national turnout is the best way to understand the impact of state and local laws. National totals disguise enormous variation in turnout between and within states, competitiveness in statewide races, and differences in rules and laws.  There is also some scattered evidence that early voting benefits some subpopulations more than others, and this can be overlooked in national and even statewide totals. 

The second point in the article is harder to address: the costs of early voting.  Michael McDonald suggests that there is resistance to early voting in the Northeast because most of these states administer elections at the township level.  McDonald is right to highlight the importance of providing sufficient funding to jurisdictions to conduct elections, regardless of what options are offered (budgets were the most common point of discussion at a recent NCSL gathering).  

All I’d add here is that we don’t have a clear sense of how much early voting costs, and whether cost savings can be obtained by strategically reallocating resources between early voting and election day voting (though mis-forecasts of voting turnout can turn disastrous). 

The takeaway is that states considering early voting should evaluate the options mostly on the grounds of voter convenience, on how well the options can be adapted to the conditions faced by local jurisdictions, and only lastly on how much they may increase overall turnout.