
This section of the Topics division includes reports and analyses of various proposals for election auditing methods.
NEW, June 1, 2008:
What Constitutes an Election Audit? [1]
Presentation by R. H. Phillips at the "Building a New World" Conference, Radford, VA, May 24, 2008
As an introduction to the subject of election auditing, EDA recommends this essay by Richard Hayes Phillips, What Constitutes an Election Audit? [1] With the help of Ohio volunteers, Phillips conducted an audit of the actual ballots cast in the 2004 Ohio presidential election, proving beyond a shadow of a doubt that the 2004 election was stolen in Ohio.
What Ohio citizens conducted, under my direction, was a genuine audit of the 2004 presidential election. This was no mere “spot check” of randomly selected precincts, and no mere “recount” of the same ballots previously run through the electronic tabulators.
We learned to ask for everything: ballots, poll books, voter signature books, ballot accounting charts, packing slips, and invoices. We asked to see all the ballots, whether voted, spoiled, or unused. And we always asked to photograph the records, so that I could analyze them with painstaking accuracy, and reexamine the same records when necessary.
We lacked the element of surprise, as a list of requested precincts was almost always demanded in writing and well in advance. But I picked the counties, and I picked the precincts, and I rarely said how or why. “What are you looking for?” I was often asked. “I don’t know,” I would reply. “This is an audit.”
When the IRS audits your tax returns, you don’t get to pick the year, or decide which records to show them. They want to see everything. And so did we.
When election results have been altered, this will almost always be apparent at the precinct level. Either the numbers will be at variance with long-established voting patterns, or inexplicable combinations of choices will be attributed to the same voters on the same day, or both. Voter turnout, that is, the percentage of registered voters casting ballots, may be suspect, either too high or too low. The percentage of ballots recorded as having no choice for the office, equal to undervotes plus overvotes, may be anomalously high or low. Based upon these criteria, we audited the most suspect precincts.
All of the records we requested are important. It is rightly the responsibility of election officials to verify the accuracy of the elections they administer.
The ballot accounting charts for each precinct should state the number of ballots received at the start of the day, which should match the number on the itemized packing slip from the printer who supplied the county with all its ballots. That same chart should state the total number of “voted” ballots, which should equal the number of names in the poll book and the voter signature book. It should state the number of “spoiled” ballots, which should match the number of altered ballot stub numbers recorded in the voter signature book. And it should state the number of “unused” ballots remaining at the end of the day, which, when added to the number of “voted” and “spoiled” ballots, should equal the total number of ballots received at the start of the day. Without these records there is no way to tell if the ballot box contains too many ballots, or too few.
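The checks Phillips describes reduce to a handful of accounting identities. A minimal Python sketch is given below; the field names are illustrative only and are not drawn from any official accounting chart:

    def reconcile_precinct(received, voted, spoiled, unused, signatures):
        """Flag discrepancies in one precinct's ballot accounting.
        received   - ballots delivered, per the printer's itemized packing slip
        voted      - "voted" ballots, per the ballot accounting chart
        spoiled    - "spoiled" ballots, per the chart
        unused     - "unused" ballots remaining at the end of the day
        signatures - names recorded in the poll book / voter signature book"""
        problems = []
        if voted + spoiled + unused != received:
            problems.append("voted + spoiled + unused does not equal ballots received")
        if voted != signatures:
            problems.append('"voted" ballots do not match names in the poll book')
        return problems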
Ballot stubs are numbered strips of paper attached to each ballot. The stub number for each ballot issued, both “voted” and “spoiled,” should be recorded by a poll worker right next to the voter’s name in both the poll book, written by the poll worker, and the voter signature book, signed by the voter. The ballot stub should be torn off and placed into the ballot box separately, to protect voter privacy and the right to a secret ballot. The numbers on the torn-off stubs should match the stub numbers in the poll book and the voter signature book, and the numbers on the stubs still attached to the unused ballots should not; and all the stubs, and all the ballots, whether voted, spoiled, or unused, should be preserved. Without these records there is no way to tell if the ballots run through the electronic tabulator are the same ballots issued to the voters.
In Ohio, Boards of Elections are at liberty to “remake” ballots at their discretion, ostensibly so that the voter’s intent will be accurately recorded by the electronic tabulator. In the counties we audited, the number of “remakes” or “duplicates” ranged from a mere handful to more than one percent of the total ballots cast in the entire county. The original “spoiled” ballots which the “remakes” allegedly duplicate are supposed to be preserved. We never saw any of them. Without these records, there is no way to tell if the “remakes” are legitimate.
The subsets of regular, absentee, and provisional ballots in each precinct are also supposed to match the corresponding numbers of names recorded in the poll book and the voter signature book. If the books do not indicate which absentee ballots were returned by the voters and which were not, and which provisional ballots were approved and which were not, another opportunity arises for alteration of the vote count.
The ballots for each precinct must be kept in the same sequence in which the auditor found them. Failure to do so can compromise the evidence. Long consecutive runs of ballots for one candidate or another are proof of hand sorting, for which there might be no legitimate reason. Abrupt changes in voting patterns partway through the stack of ballots may be indicative of ballot tampering, especially if there is a marked increase or decrease in “ticket splitting.” This is why “whole ballot analysis” is essential. The combinations of choices attributed to individual voters on each ballot must be examined, not merely the contest being investigated.
Ballots from numerous counties must be examined. Unless this is done, there is no frame of reference, and there is no way to tell if ballots are counterfeit. Likewise, all the marks on the ballot must be examined, to see if one or more of the marks are made by a different hand than the others. Such forgeries can be a method for spoiling the ballot by turning the voter’s choice into an “overvote,” or by turning an “undervote” into a vote for the candidate desired by the election riggers.
One cannot overstate the importance of the chain of custody for the ballots, as it is here that the opportunity for election rigging arises. Lapses in the chain of custody after the ballots leave the polling place on Election Night provide the opportunity for ballot tampering prior to tabulation, in which case a subsequent hand count will nicely match the tabulator count. Lapses in the chain of custody after the ballots are tabulated provide the opportunity for ballot substitution in order to get the ballots to match a rigged tabulator count. And the greater the number of “extra” ballots ordered by the Board of Elections, above and beyond what could possibly be needed to accommodate all the voters, the greater the margin by which the vote count can be altered. All that is needed to cover the tracks is to destroy the unwanted ballots and the unused ballots, or to leave the “extra” ballots off the invoice and the packing slip in the first place.
Despite the numerous methods of ballot tampering practiced in Ohio, doing away with paper ballots is not the solution. Quite the contrary; the fact that eighty-five percent of the votes in Ohio in the 2004 election were cast on paper is what made the fraud detectable in the first place, whereas electronic voting with no paper record makes election fraud undetectable. What made ballot alteration and ballot substitution possible in Ohio were the breaks in the chain of custody; and what allowed the 2004 election to withstand the initial court challenge was the fact that investigators were not allowed to examine the ballots until 2006.
The preconditions for any crime are motive, means, and opportunity. In case of election fraud, the motive will always be provided by the desire to win the official count, and the means will always be provided by whatever voting method is used. The only way to prevent election fraud is to prevent the opportunity.
In my judgment, based upon three years’ experience auditing a rigged presidential election, the solution is this: paper ballots, counted by hand, in full public view, at the polling place, on Election Night, no matter how long it takes. In this way the counting takes place before any chain of custody questions have arisen, which effectively prevents the opportunity for wholesale election fraud associated with central tabulation. If this seems old-fashioned, so be it. When one is on the wrong path, a step backward is a step in the right direction.
Make them steal elections the old-fashioned way, by altering ballots, destroying ballots, or stuffing the ballot boxes right at the polling places, in precinct after precinct. This requires the collusion of large numbers of poll workers, both Republican and Democrat, and runs the risk of exposure at any polling place where we, the people, are watching.
Originally published January 14, 2007 at OpEdNews [3]
Auditing 20% of polling places is sufficient to ensure the integrity of outcomes in almost all federal elections for most election systems, although it would fall short of ensuring election integrity in small, close local races. Connecticut's new procedure is a vast improvement in the sufficiency of manual audits. This is a terrific step forward by Susan Bysiewicz, who also had the smarts to select optical scan voting systems (no DREs) for Connecticut!
--------------------------------
http://www.journalinquirer.com/site/news.cfm?newsid=17707463&BRD=985&PAG... [4]
By Keith M. Phaneuf, Journal Inquirer, 01/12/2007
Excerpts from the article below:
HARTFORD - Hoping to make Connecticut a national model for safe elections, Secretary of the State Susan Bysiewicz unveiled a proposal this week calling for mandatory annual audits of one-fifth of all polling places.
"We owe it to the voters to allow them to always feel confident that they have an fair and transparent election process," Bysiewicz said during an interview in her Capitol office.
And while an ongoing debate on Capitol Hill includes a proposed national standard of audits in at least 2 percent of each state's voting precincts, Bysiewicz says she's looking at a much tougher standard.
The secretary said Thursday she is submitting a proposal to the state legislature's Government Administration and Elections Committee that would require audits in at least 20 percent of the state's 769 voting precincts, to be selected randomly.
Connecticut conducted a pilot program in 25 communities this fall. Two post-election analyses found no mistakes made by the new machines -which read ballots that voters mark by filling in ovals next to candidates' names.
But unlike the outgoing metal lever machines, they easily allow each ballot to be re-examined, both visually and electronically. Local election officials normally can complete an audit within one day.
"We have the capacity to do it, and I want the taxpayers to know that we've spent money on machines that work," Bysiewicz said. "I think it's very clear we made the right decision as to voting tech, but this would make us a national leader.
© Journal Inquirer 2007
HR 5036 and Auditing Standards
The articles linked from this page are a related body of work on election auditing standards and methods, by Kathy Dopp of USCountsVotes/National Election Data Archive.
Here is the text of the weakened House Substitute to HR5036:
http://electionarchive.org/ucvInfo/US/legislation/HouseAdminHR5036-MAR28... [5]
Compare to the bill HR5036 as introduced by Holt:
http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=110_cong_bills... [6]
Please tell any House Administration or Technology Committee members' staffers that "There cannot be valid election audits without a requirement for the public release of a report of all the machine-counted vote counts which might be selected for audit 'before' making the random selections of such vote counts to manually audit."

All the votes to be audited must be first counted and then publicly released BEFORE randomly selecting which precincts (or other audit units) will be audited; and the audit units must be selected from this publicly released report of vote counts -- not, as in Utah, where the "audited" vote counts are never publicly released.
See these docs for a description of some sham election audit procedures that the House Admin Substitute version of HR5036 would allow if it is not fixed:
http://electionarchive.org/ucvAnalysis/CO/Ltr2ColoradoLegislature.pdf [7]
http://electionarchive.org/ucvAnalysis/US/paper-audits/CO/NEDA-Response2... [8]
http://utahcountvotes.org/ltgov/Response2LtGov-Audit-Recount.pdf [9]
A valid election auditing procedure is described here:
http://electionarchive.org/ucvAnalysis/US/paper-audits/legislative/VoteC... [10]
How Big Should an Election Audit Be?
Fixed Rate Audits Do Not Work For Elections
January 17, 2007
This paper presents a simple formula for estimating vote count audit sample sizes to achieve any desired certainty for ensuring the integrity of election outcomes. The formula described in this paper for estimating election audit sample sizes was first derived by Ronald Rivest.1
In particular, this paper briefly shows how to derive an estimate for vote count audit sample sizes:
1. to achieve any desired probability of detecting vote miscount that could alter an election outcome, and
2. to incorporate the principle of maximum vote shift per machine vote count2 that is required for logical consistency when auditing elections.
Vote count audit sample sizes must be based on margins between the leading two candidates because the
smaller the margin, the smaller the amount of miscount that could wrongly alter the election outcome,
and the larger the audit sample must be to detect the smaller number of corrupt counts.
In conjunction with this paper, a spreadsheet is available to calculate exact minimum election audit sample sizes necessary to ensure the integrity of election outcomes. Marian Beddill helped to craft this new version of an earlier audit calculator spreadsheet created by Dopp.3 This new spreadsheet allows any person, without a lot of experience, to enter the important factors, obtain a result, and see the efficacy of doing smaller or larger audits. It is available here:
http://ElectionArchive.org/ucvAnalysis/US/paper-audits/HowManyToAudit.xls [13]
Why Assume a Maximum Vote Shift Per Machine Vote Count?
A maximum rate of miscount within any one vote count must be assumed to derive an estimate because
100% of votes cannot be wrongly shifted within each vote count. It would not only be unlikely that
100% of votes are available to target, but it is also unlikely that anyone trying to rig an election would
try to steal 100% of the target votes because it would be immediately noticed.4 However, to avoid
detection, a fraudster would corrupt as few counts as possible. So a maximum rate of vote shift per vote count is assumed to calculate the number of corrupt counts that could wrongly alter an election outcome.
The larger the assumed maximum wrongful vote shift rate per machine vote count, the fewer the corrupt vote counts needed to wrongly alter an election outcome; and the fewer the corrupt counts, the larger the audit sample size must be to detect them.
1 Ronald Rivest, "On Estimating the Size of a Statistical Audit": http://theory.csail.mit.edu/~rivest/Rivest-OnEstimatingTheSizeOfAStatisticalAudit.pdf [14]
2 The Brennan Center, "The Machinery of Democracy: Protecting Elections in an Electronic World," June 2006.
3 July 2006: http://electionarchive.org/ucvAnalysis/US/paper-audits/AuditCalculator.xls [15]
4 If vote miscount were made by innocent error, miscount would be more likely to appear in all vote counts and so be detected with any audit amount. Independent audits should be designed to detect deliberate fraud.
We divide the margin between leading candidates by 2 to obtain the overall rate of votes which would have to be shifted to alter an outcome, and then divide by the assumed maximum wrongful vote shift per vote count to find the minimum percentage of vote counts that must be corrupt to wrongly alter an election outcome.5
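Restating that arithmetic symbolically (our notation, not the paper's: m is the margin as a fraction of the total vote, s_max the assumed maximum wrongful vote shift per count, and N the total number of vote counts), the minimum number of corrupt counts that could wrongly alter an outcome is

\[ b_{\min} \;=\; \left\lceil \frac{m/2}{s_{\max}} \, N \right\rceil \]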
Historical Background
In 1975 Roy Saltman first introduced the necessity of basing election audit sample sizes on the margins between candidates: the closer the margin between candidates, the smaller the amount of vote fraud or miscount that could wrongly alter the outcome, and the fewer the corrupt counts needed to do so.
To detect small numbers of corrupt vote counts, larger audit samples are necessary. For example, if the
margin between the two leading candidates is 1%, then only approximately 3 corrupt vote counts out of
100 might put the wrong candidate into office. If the margin is 5%, it might take 15 or more corrupt
vote counts out of 100 to wrongly alter the outcome. A larger manual audit sample size is needed to
uncover one of 3 corrupt counts than to uncover one of 15 corrupt counts.
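To make those figures concrete, assume for illustration a maximum wrongful shift of 15% of the votes within any one corrupt count:

\[ \text{1\% margin:}\quad \frac{0.01/2}{0.15} \approx 3.3\%\ \text{of counts, i.e., about 3 corrupt counts in 100} \]
\[ \text{5\% margin:}\quad \frac{0.05/2}{0.15} \approx 16.7\%\ \text{of counts, i.e., roughly 15 or more in 100} \]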
Unfortunately Saltman's work in 1975 was substantially ignored until the concepts were rediscovered by Kathy Dopp in July 2006. Dopp furthered Saltman's work by applying maximum vote shift per vote count assumptions and using an estimate based on sampling with replacement6 to create a trial-and-test spreadsheet method to obtain the exact audit sample size.7
Beginning in July 2006, Dopp and Frank Stenger developed a numerical method to exactly calculate election audit sample sizes, which they released in September 2006, "The Election Integrity Audit"8. The Dopp/Stenger method included an optional method for adjusting audit sample sizes for precinct size variation in case miscounts are targeted to the largest precincts. However, the Dopp/Stenger numerical method has not caught on yet, perhaps due to the complexity of using a computer program at a time when the public is demanding transparent, easily understood verification of election results.
Ronald Rivest of MIT derived a formula that more accurately estimates vote count audit sample sizes9
than the one based on sampling with replacement suggested in the Brennan Center report because it
gives a smaller over-estimate of the exact minimum audit required to ensure the integrity of election
outcomes. For more detailed history and derivation of the formula, see Rivest’s paper.
This paper more simply describes the derivation of the Rivest estimate and shows more explicitly how to
use Rivest’s formula to estimate audit sample sizes for specific margins and assumed maximum vote
shift per vote count.
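For reference, the estimate as given in Rivest's paper [14], with N audit units in total, b corrupt units, and a desired probability P of sampling at least one corrupt unit, is approximately

\[ u \;\approx\; \left(N - \frac{b-1}{2}\right)\left(1 - (1-P)^{1/b}\right) \]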
The minimum audit sample size necessary to ensure the integrity of election outcomes to any desired
level of certainty can be determined by using this small easy-to-use spreadsheet:
http://electionarchive.org/ucvAnalysis/US/paper-audits/HowManyToAudit.xls [16]
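For readers who prefer code to a spreadsheet, a minimal Python sketch of the same trial-and-test calculation is given below. It assumes equal-sized vote counts (the Dopp/Stenger adjustment for precinct size variation is omitted), and the function names are ours, not the spreadsheet's:

    from math import comb, ceil

    def detection_probability(N, b, n):
        """Chance that a random sample of n out of N vote counts includes at
        least one of b corrupt counts (hypergeometric; without replacement)."""
        if n > N - b:
            return 1.0  # the sample is too large to miss every corrupt count
        return 1.0 - comb(N - b, n) / comb(N, n)

    def min_audit_size(N, margin, max_shift, confidence=0.95):
        """Smallest sample giving at least `confidence` probability of catching
        outcome-altering miscount, assuming each corrupt count shifts at most
        `max_shift` of its votes and all counts are the same size."""
        # Fewest corrupt counts that could flip the race: (margin/2)/max_shift of N.
        b = max(1, ceil((margin / 2.0) / max_shift * N))
        for n in range(1, N + 1):
            if detection_probability(N, b, n) >= confidence:
                return n
        return N

    # Example: 100 equal-sized counts, 1% margin, 15% maximum shift per count.
    print(min_audit_size(100, 0.01, 0.15, confidence=0.95))

For the example shown, the loop returns an audit of roughly half the counts, which illustrates why fixed-rate audits of a few percent cannot ensure outcomes in close races.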
5 Vote counts showing more than the assumed maximum vote shift, relative to prior elections or to partisan active voter registration records in voter history files, must also be included in manual audits, in addition to the randomly selected counts.
6 A sampling with replacement estimate was suggested in the appendices on parallel election day machine testing in the Brennan Center's June 2006 report "The Machinery of Democracy…".
7 The Brennan Center introduced the idea of a maximum wrongful vote shift per machine in its appendix on sampling voting machines for Election Day random testing of paperless voting machines.
8 http://electionarchive.org/ucvAnalysis/US/paper-audits/ElectionIntegrity... [17]
9 Ibid 1.
My name is Bruce O'Dell, and I am a self-employed information technology consultant based in Minneapolis, Minnesota. I have twenty-five years' professional experience specializing in the design of very large scale computer systems with extraordinary requirements for security and integrity. For example, while an employee of American Express, I led a project to design a central computer security service to authorize access to financial systems across that company, and to exchange data and transact on our customers' behalf with other financial institutions throughout North America. In 2005 I was the architect in charge of deploying a comprehensive new company-wide security environment at one of the 20 largest public companies in America. I would like to thank the Sub-Committee for the opportunity to share my perspective on electronic voting as someone accountable for the security and integrity of computer systems which safely handle billions - or even trillions - of dollars of other people's money.
Since the heady days of the 1960s, a new, multi-billion-dollar electronic voting industry with world-wide growth aspirations has emerged. Whether the original drive to automate our voting was driven by a genuine desire to improve elections or a simple faith that the latest and greatest technology must necessarily be the best, that industry is now so entrenched that it has become almost impossible to question the original decision to automate voting through application of computer technology.
Problems with computerized voting equipment are well-documented in the computer security community, and began to surface as soon as it was first deployed more than 40 years ago. As early as 1984, as reported in the well-respected “Risks to the Public of the Use of Computer Systems” forum a “series of articles by David Burnham in The New York Times documented vulnerabilities to tampering in equipment sold by Computer Election Systems, then the dominant electronic vendor; elections with their machines were challenged in Indiana, West Virginia, and Maryland, with rigging suspected in the 1984 election in the first two states; Federal Election Commission standards were described as inadequate; Texas also investigated numerous discrepancies involving Business Records Corporation - formerly known as Computer Election Systems; the NSA was asked to investigate if CES systems were open to fraud; California and Florida also investigated; [voting systems examiner] Michael Shamos was quoted as saying CES systems equipment "is a security nightmare open to tampering in a multitude of ways."
Computer Professionals for Social Responsibility, in the fall of 1988, noted: "America’s fundamental democratic institution is ripe for abuse... It is ridiculous for our country to run such a haphazard, easily violated election system. If we are to retain confidence in our election results, we must institute adequate security procedures in computerized vote tallying, and return election control to the citizenry."
In a pattern often to be repeated over the years, little attention was paid to those reports or to the urgent warnings from independent security experts, while Business Records Corporation prospered and grew rapidly, eventually merging into the company known as Election Systems & Software, currently the leading vendor of computerized election equipment and services.
Yet despite these warnings - which in hindsight seem remarkably prescient - several generations of increasingly complex and expensive computerized voting technology were subsequently developed, marketed and deployed. At the same time, for nearly twenty years, the catalog of reported problems, outages and security vulnerabilities also continued to grow - and recently, accelerated rapidly thanks in part to the “Help America Vote Act” of 2002 (HAVA). Passed in the aftermath of the disputed presidential election in 2000, HAVA was intended to improve the process of voting in America. But as a direct result of its enactment, a new wave of secret and proprietary computerized voting technology has completed the process of computerization of American elections.
With thousands of reported problems nationwide affecting newly-deployed electronic voting equipment in the subsequent elections of 2002, 2004 and 2006, it is clear that HAVA has had precisely the opposite effect to its stated intention. As an information technology professional I am dismayed that all this has been allowed to happen with the blessing and active participation of so many of my colleagues, many of whom make their living promoting e-voting technologies. Billions of dollars have been spent on new voting equipment in the absence of what I would consider adequate disclosure of the true costs and risks to policy makers and the general public. This is a disservice to those who must rely on IT professionals to assess the technologies they do not understand.
As we will see, not only are there fundamental limitations to our ability to prove the accuracy and trustworthiness of any complex real-world computing system, but voting itself deserves the strongest degree of protection. Many of my colleagues, as well as their clients and the general public, seem to utterly misunderstand the essential point: computerized voting systems should be classified as national defense systems demanding a much higher standard of protection than more conventional applications.
Undetected widespread covert manipulation of computerized voting systems is the functional equivalent of invasion and occupation by a foreign power. In either case the people lose control of their own destinies, perhaps permanently. Undetected covert manipulation of voting systems could even be worse than mere invasion, since the “electoral coup” would appear to occur with the illusion of the manufactured consent of the governed, and there would be no “tanks in the street” to galvanize resistance.
Voting systems used in American federal elections grant regulatory powers over the world’s largest economy, disbursement authority for the federal procurement budget, control of the composition of the Supreme Court and federal judiciary, and command of the world’s only superpower military. The financial rewards alone for covert influence over the outcome of state elections are potentially very lucrative as well.
Yet despite the fact that our computerized voting systems collectively represent the most irresistible target for insider manipulation in the history of the world, they are not even currently given the same level of protection as systems I’m familiar with in banking and financial services, much less than to computerized gaming equipment in Las Vegas. This is a national scandal, and a disgraceful lapse on the part of my profession.
You may hear from those who believe, to the contrary, that there are powerful information technology industry quality assurance and inspection techniques - such as certification of hardware and software by independent testing laboratories, county-sponsored Logic and Accuracy Testing, or even source code inspection - that can ensure the integrity and accuracy of New Hampshire's computerized vote tabulation software.
Yet, ensuring the integrity of systems is the hardest of all challenges in computing. Once again I believe my profession has failed to adequately inform our clients and the general public.
One of the primary reasons why trustworthy technology is so hard to achieve is that the mind-boggling complexity of real-world systems provides an enormous number of potential points of vulnerability. Voting hardware is deployed at more than 180,000 precincts and in more than three thousand counties in the US - not to forget the 309 voting locations in New Hampshire that tabulate votes by machine. The mere physical logistics of moving all that equipment out to the field and getting election results back to the central tabulators for the official canvass is challenging.
Not only are there potentially hundreds of New Hampshire voting devices, there are thousands of individual hardware and software components within each device. This includes proprietary software developed by voting equipment vendors, mass market consumer products like Microsoft Windows, and a host of highly complex, very specialized software - most with no visible behaviors - supplied by a long list of other vendors, many of them offshore.
In addition to all the devices and their individual components, we must also consider the collective actions of the thousands of people who participate, directly or indirectly, in designing, programming, testing, distributing, manufacturing, installing, maintaining, configuring, operating, transporting, monitoring, repairing and storing the vast number of hardware and software components that collectively add up to our system of electronic voting.
You may well hear advocates call for rigorous testing and controls to be applied throughout the end-to-end voting process, but the truth is that no amount of testing alone can conjure trust in the overall system.
It is well known in the information technology profession that computers are ultimately "black boxes" - you cannot actually see what bits are really present and executing; and all methods to attempt to do so require other software that itself has the same problem, in an infinite regress. There is no workaround.
The only way to truly know what is running in a computer at any given moment is to observe its behavior: give all possible inputs, measure its corresponding outputs, and then check to see if the inputs and outputs you observe match the specification.
It is reasonable to ask: if computer software is always tested before use, why bother to double-check after the fact? Unfortunately, you really have no guarantee that a given computer program's behavior as measured, say, at 10:00 AM will have any relationship to the same program's execution at noon. Computers have clocks and can tell time, and can easily be programmed to behave differently at different times, on different dates - or under an endless variety of different circumstances.
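A toy Python sketch of the point (entirely hypothetical logic, invented for illustration and not taken from any real voting product): identical inputs produce different outputs depending on the machine clock, so a clean test at one time proves nothing about another.

    from datetime import datetime, date

    def record_vote(choice, counts):
        """Hypothetical time-triggered misbehavior: the function counts votes
        honestly except during a narrow window on one specific date."""
        now = datetime.now()
        if now.date() == date(2008, 11, 4) and now.hour == 12:
            counts["Smith"] = counts.get("Smith", 0) + 1  # misattribute the vote
        else:
            counts[choice] = counts.get(choice, 0) + 1    # honest path
        return counts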
When it comes to systems processing high-value transactions of interest to potential criminal embezzlers - like money or votes - the inherent limitations of point-in-time behavioral testing make it unacceptably risky. Instead, some kind of computer behavioral monitoring system is required to record a vulnerable system's inputs and corresponding outputs while it is processing critical transactions. This would provide all the information needed to enable a human auditor or another automated auditing system to spot processing errors or manipulation of the transactions. But as I will point out, the inherent nature of voting severely limits our ability to monitor the behavior of voting systems.
Independent inspection and certification of source code has no real benefit. If a malicious insider at Diebold or ES&S truly wanted to corrupt vote tabulation logic, they would hardly put it in the official release handed over for review. There’s simply no reason to trust that any software delivered for inspection bears any relationship whatsoever to the logic that actually runs on voting devices in an election.
Since real-world computer systems involve complex inventories of hundreds or even thousands of application program modules, firmware, device drivers and operating system components, static inspection alone will never be able to reliably determine what those components will actually do at any given point in time. There’s simply no reason to believe that a given executable binary file corresponds to the given source code, and no way to truly know what the executable is doing - except by running it. Static inspection is not a security measure.
If source code inspection could allow us to reliably predict how a particular instance of a program will actually work in the field, Microsoft Windows would be a rock-solid, bulletproof product - after all, tens of thousands of programmers spend their professional careers scrutinizing its source code every day. It’s simply absurd for serious IT professionals to state that it would be anything more than a sham to “inspect” whatever source code a vendor supplies. Worse yet, it misleads the public, making it seem as if IT professionals have the power to “know” the source code is benign, and to “know” precisely what it will and won’t do, and to “know” where and how it is actually running in a particular device in the field - when of course, we do not.
Nor can we test security into software. It is a truism in my profession that the purpose of testing is to find “bugs” - not to indicate that a piece of software contains no flaws. It’s a subtle point, but what it really means is that if I’ve found 100 errors, there is simply no magic oracle that will then tell me “well, that’s all, we’re done, no more bugs”.
If it were possible to test quality - much less security - into any piece of software, Microsoft Windows would also be the bug-free, highly secure platform we all know it to be, since Microsoft has the world's most sophisticated automated testing tools, thousands of paid testers, and hundreds of thousands of people worldwide who volunteer to help. Yet even so, several critical Microsoft security defects have been reported every month for the last several years. But not to pick on Microsoft: Secunia, a Danish company, maintains an online listing of security issues in popular software; in every case these flaws were discovered after completion of formal testing. The list itself is currently over 700 pages long.
As socially-responsible professionals we must openly acknowledge the inherent limitations of our ability to ensure voting is as trustworthy as a critical national security system should be. We cannot and should not ask the public to simply trust the outcome of any testing and certification process, no matter how many “experts” say so.
I know that some may at this point draw an analogy between computerized banking and computerized voting. For example, Michael Shamos, a noted advocate of computerized voting, and a long-time consultant to states on the certification of their electronic voting systems has stated:
“Why should voting systems be held to a standard of perfection when nothing else in society is? Nonetheless, electronic voting watchdogs insist that election equipment must be perfect or it is totally unusable. The analogy between voting systems and the bank is particularly apt because (1) the chance of a system being tampered with successfully is low; (2) even successful tampering does not necessarily result in the wrong candidate being elected; and (3) only a small portion of the vote is cast on one machine.”
Unfortunately, computerized voting and computerized banking actually have almost nothing in common.
One reason why electronic financial transactions are as secure as they are (by which I only mean that embezzlement is the exception and not the rule) is that while financial transactions are private, they are hardly anonymous; you need to prove your identity to all the other counterparties involved. Each counterparty gets and keeps their own independent records of the transaction, all counterparties are strongly motivated to spot discrepancies and compare their records with others, while procedures relating to resolution of financial disputes are legally mature.
Why are voting systems so different? In contrast with banking, voting is both a private and an anonymous transaction. Applying counterparty-based financial auditing mechanisms to voting transactions as they occur would compromise the confidentiality of the vote and voter.
To meet the standards of banking, not only would multiple independent copies of audit records fully describing the voter's identity and ballot choices need to be generated and shared with multiple parties, but 100% of those transaction records would also be routinely audited and the results double-checked by external auditors as well as the voters themselves.
Although some computer scientists feel they can maintain both voter privacy and vote count integrity by some magical all-electronic secret internal audit, ultimately there is no reliable means to do so. At the moment of creating the electronic audit record, the computer could be programmed to electronically assert you input “Smith for Governor" even though you actually input "Jones for Governor". Every such all-electronic auditing scheme, no matter how elaborate, would from that point on then simply record a lie with every appearance of the truth.
The only way voters can protect themselves from such a consistently-told electronic lie is with some kind of corresponding tangible, visible record that can be used as a proof you really voted for Jones. Unlike in banking, we cannot give a voter a receipt or a monthly statement; the best we can do is receive from the voter an anonymous receipt that says the equivalent of "Someone Voted for Jones", and then entrust it to the electoral authorities to count (by hand or machine) and to retain for future auditing or recounting.
In voting, on the other hand, only relatively few states routinely audit their paper ballot records (if they have any), and then ballots are checked in only a few percent of the precincts. Yet if a bank audited only a few percent of its accounts - or none at all unless one of its depositors paid for it themselves - its customers would flee, regulators would shut it down, and under current Sarbanes-Oxley legislation, its Board of Directors would face possible jail time.
To its credit the state of New Hampshire has avoided purchase and deployment of the most risky and problematic class of voting equipment: Direct-Recording Electronic voting equipment (with or without a so-called “voter verified paper audit trail”). Unfortunately it has chosen to continue to rely on Diebold optical scan voting equipment known to be vulnerable to manipulation. Yet by legally enshrining a voter-marked paper ballot, whether tallied by people or by machines, as the definitive record of voter intent, New Hampshire is far better prepared than many other states to ensure the integrity of its democratic processes.
The risks of errors and covert manipulation are inherent to the use of computer software. Human nature being what it is, those risks are ever-present in all systems that process high-value transactions - especially those involving money or voting. So to achieve trustworthiness, independent auditing of an electronic vote count against an independent record of voter intent should always be performed.
Both the accuracy and integrity of any paper ballot record must also be assured.
To ensure integrity, no one must be able to alter, delete, or substitute paper ballot records after they are verified by the voter and until they are tallied. Immediately after the election, traditional paper-based audit and control concerns take precedence. In general, the more time passes since creation and the further it travels from point of origin, the more risk there is of manipulation or destruction of paper records.
Unfortunately, there is no such thing as perfect security; the best we can do is to mitigate the risks as best we can. In recognition of this inherent problem, the Canadian system of counting paper ballots in-precinct on election night - in concert with their absentee/early voting procedure - is highly secure. The paper flow is always under observation, and ballots are immediately counted in front of multiple adversarial counterparties - namely the political party representatives.
Admittedly, even rigorous paper-handling processes are not perfectly secure - but on the other hand, in the last 600 years of general use of paper records, we have figured out some pretty good procedures. Yet I doubt that many jurisdictions in America handle paper election records with the level of custodial care that we find, say, in handling real estate collateral in the mortgage-backed securities market, much less in Canadian elections.
There are additional practical problems with checking the trustworthiness of an electronic vote tally after the fact. Since paper ballot records are typically not recounted unless margins are very close, brazen theft would be rewarded in practice. No candidate losing by a large margin wants to challenge an election and force a recount. Political culture being what it is in America, such candidates quickly get labeled as "sore losers" who "waste the public's money and the government's time" on pointless recounts, and who use "conspiracy theories" to compensate for their inability to admit they lost.
Although New Hampshire’s experience with recounts appears to show that electronic and paper tallies seldom differ by a significant number of votes, relatively few “top ticket” races have been recounted - presumably the rewards of altering the outcome of major state or federal offices are more likely to outweigh the risk of discovery.
When statewide recounts of paper ballot records for high-stakes races occur, recent experiences in Ohio and Washington state clearly reveal the potential for flaws in both approach and execution in conventional recount and spot audit protocols.
I personally believe that New Hampshire is better served by enhancing its hand-counted paper ballot protocols, to retain full citizen control and oversight of the electoral process. On the other hand, as long as optical scan tabulation is performed (especially on equipment known to be vulnerable to covert manipulation), counting some of the ballots by hand and comparing to the electronic tally can identify accidental or deliberate mistabulation of the vote. The details of the independent hand count protocol determine the probability of detection.
There are two general approaches for hand count validation of electronic vote tabulation: precinct random spot audits and universal ballot sampling. Several states currently rely on precinct random spot audits; for example, California counts 1% of its precincts by hand, and Minnesota performs a random post-election hand-count audit of 2 precincts per county (amounting to somewhat more than 4% of the total number of precincts). Due to differences between human and electronic or mechanical interpretation of voter intent, small discrepancies are not necessarily a sign of systematic mistabulation - although there are credible exploits in close elections where outcome-altering shifts can be achieved with just a few votes per precinct. Typically there is a formal or informal standard for expanding the hand-count validation if significant discrepancies are detected; in Minnesota the standard for expanding the audit is a 0.5% discrepancy between the hand and machine tally.
There are several potential drawbacks with conventional precinct spot-audit protocols. (1) There are classic concerns about chain of custody which are proportional to the time which passes between casting the ballot and performing the hand count validation. Ideally, the spot audit would occur in precinct on election night. (2) The recent conviction and sentencing of election officials in Ohio who “gamed” the selection of precincts for the Ohio partial recount to ensure that no discrepancies would be detected illustrates the difficulty of ensuring true random selection is followed. (3) If hand count validation occurs in only a few percent of precincts and mistabulation is clustered, the laws of statistics tell us that there can still remain a significant chance that the mistabulation is not detected. (4) Clustered mistabulation may be detected, but the magnitude of the discrepancy may be too small to expand the audit further. Political pressures may be placed on a candidate such that even if a suspicious pattern of discrepancies is detected - but it appears to be insufficient to change the outcome - it would not be practical to continue to contest the result and expand the audit. (Candidates do not wish to be labeled a “sore loser” - those who do may find their career in peril.)
The Election Defense Alliance has created and published the results of computer simulations of a variety of precinct spot-audit protocols - such as the ones proposed in Washington DC in 2006 as HR 550, and this year, as HR 811. Our findings indicate that especially in the case of the US House of Representatives (involving on average about 440 precincts, nationwide), there is an unacceptably high rate of failure to detect outcome altering mistabulation in many credible scenarios as modeled.
The alternative hand-count election verification protocol involves a somewhat counter-intuitive approach: hand-counting a few percent of the vote in 100% of the precincts, rather than hand-counting 100% of the vote in a few percent of the precincts.
This protocol - which Election Defense Alliance calls UBS, or "Universal Ballot Sampling" - randomly selects a sample of individual ballots from every precinct voting location, and hand-counts just those ballots. The rationale is that of a public opinion poll: UBS randomly samples ballots for hand-counting in much the same way that an opinion poll randomly samples a population. If enough ballots are sampled and hand-counted, the accuracy of that sample can be estimated to a high degree of precision - just as the margin of error of a random public opinion poll can be. It turns out that randomly sampling approximately 15,000 - 20,000 votes in any contest should produce a sample that reflects the outcome of the election as a whole within plus or minus 1%, with 99% certainty.
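The standard polling arithmetic behind those figures, with z ≈ 2.576 for 99% confidence, the worst-case proportion p = 0.5, and margin of error E = 0.01, is

\[ n \;=\; \left(\frac{z}{E}\right)^{2} p\,(1-p) \;=\; \left(\frac{2.576}{0.01}\right)^{2} \times 0.25 \;\approx\; 16{,}600 \]

which falls within the 15,000 - 20,000 range cited above.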
Since most US House races generate 150,000 - 200,000 votes, simply randomly sampling every tenth ballot in a precinct should ensure that when the precinct hand count sample results are rolled up, the votes for US House candidates in the sample match the votes in the electorate as a whole within plus or minus 1% with high confidence.
Election Defense Alliance has created computer simulations of the UBS protocol and empirically verified that, if the precinct ballot sample is random, UBS indeed detected 100% of simulated mistabulations greater than 1% of the vote.
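A minimal Python sketch of what such a simulation can look like (our illustration, not EDA's published simulation code): shift a fraction of the machine tally, sample a tenth of the paper ballots, and flag the race when the hand-counted sample diverges from the machine result by more than 1%.

    import random

    def ubs_trial(total_votes, true_share, shift, sample_rate=0.10, threshold=0.01):
        """One simulated UBS audit. The machine tally reports true_share + shift
        for the favored candidate; the paper ballots still reflect true_share.
        Sample sample_rate of the ballots and compare hand count to machine tally."""
        reported_share = true_share + shift
        n = int(total_votes * sample_rate)
        hits = sum(random.random() < true_share for _ in range(n))  # hand-counted sample
        return abs(hits / n - reported_share) > threshold  # True = discrepancy flagged

    # Example: 180,000 votes, a 2% mistabulation, 10% sample, 1,000 trials.
    trials = 1000
    detected = sum(ubs_trial(180_000, 0.48, 0.02) for _ in range(trials))
    print(f"flagged in {detected} of {trials} trials")

With an 18,000-ballot sample, the sampling error is a small fraction of a percent, so a 2% shift is flagged in essentially every trial, consistent with the simulation result reported above.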
This addresses several problems with the alternative, precinct spot-audit approach. If the UBS and the optical scan tally are within 1% with the sample sizes indicated, there should be high confidence that there was no significant machine mistabulation. The false-positive rate should be very low.
On the other hand, if the difference between the UBS result and the optical scan tally is greater than 1%, there is a strong and objective mathematical case for a candidate to challenge the official tally and request an expanded hand (re)count. Since the UBS results are available as soon as the optical scan tally is available, a candidate is also empowered to challenge suspect results before the “official” tally becomes fixed in the minds of the voting public and their political peers.
We have identified a number of ways to ensure that the sample of ballots selected for UBS hand count is random. It is also important to make sure that absentee ballots are pooled with in-precinct ballots, and that both are sampled randomly. Once again the election practices in New Hampshire seem well-suited to a UBS-style protocol, since early voting (which introduces additional chain of custody risk) is not allowed, absentee ballots are counted in-precinct on election night, and the pool of people familiar with efficient hand-count procedures is large.
Returning to the question posed earlier: the fundamental question of why machines should tally our votes in secret remains unanswered. Other than for the obvious financial benefit of the vendors, why should voting be a transaction tallied in secret by machines, rather than a civic transaction performed by people in public view?
In fact, there is a fascinating study from 2001 (interestingly enough, published shortly before HAVA was enacted) which concluded that not only were hand-counted paper ballots the most accurate of all vote counting methods, measuring by residual vote rate, but that every single technological “innovation” of the last century - lever machines, punch cards, optical scan, DRE - actually measurably decreased the accuracy of the voting process. Their conclusion:
These results are a stark warning of how difficult it is to implement new voting technologies. People worked hard to develop these new technologies. Election officials carefully evaluated the systems, with increasing attentiveness over the last decade. The result: our best efforts applying computer technology have decreased the accuracy of elections, to the point where the true outcomes of many races are unknowable.
There is an entire industry which is predicated on the belief that computers are better than people when it comes to counting votes, yet the precise nature of the problem that electronic voting was intended to solve remains unclear. The balance of evidence indicates that while voting by computer may well be wide open to insider manipulation, and in practice has been plagued by glitches and inaccuracies, at least it’s more expensive than the alternatives. Even when legal paper ballots are tabulated on optical scanners, the effort required to put in place a statistically-valid hand-check of the machine tallies does tend to undermine the rationale for automation in the first place.
In the final analysis, I believe computer automation of voting will be regarded by future historians as one of the greatest blunders in the history of technology. Our choice now is to determine at what price - both in money and public good will - that realization will finally strike home. In the meantime, states like New Hampshire can take action to engage their citizens in safeguarding their democratic processes, through effective hand-count validation of optical scan vote counts.
Links:
[1] http://www.electiondefensealliance.org/auditing/what_constitutes_election_audit
[2] http://www.witnesstoacrime.com
[3] http://www.opednews.com/articles/genera_kathy_do_070114_ct_3a_national_model_f.htm
[4] http://www.journalinquirer.com/site/news.cfm?newsid=17707463&BRD=985&PAG=461&dept_id=161556&rfi=6
[5] http://electionarchive.org/ucvInfo/US/legislation/HouseAdminHR5036-MAR28.pdf
[6] http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=110_cong_bills&docid=f:h5036ih.txt.pdf
[7] http://electionarchive.org/ucvAnalysis/CO/Ltr2ColoradoLegislature.pdf
[8] http://electionarchive.org/ucvAnalysis/US/paper-audits/CO/NEDA-Response2CO-AuditingRules.pdf
[9] http://utahcountvotes.org/ltgov/Response2LtGov-Audit-Recount.pdf
[10] http://electionarchive.org/ucvAnalysis/US/paper-audits/legislative/VoteCountAuditBillRequest.pdf
[11] http://electionarchive.org/ucvAnalysis/US/paper-audits/ElectinonAuditEstimator.doc
[12] mailto:[email protected]
[13] http://ElectionArchive.org/ucvAnalysis/US/paper-audits/HowManyToAudit.xls
[14] http://theory.csail.mit.edu/~rivest/Rivest-OnEstimatingTheSizeOfAStatisticalAudit.pdf
[15] http://electionarchive.org/ucvAnalysis/US/paper-audits/AuditCalculator.xls
[16] http://electionarchive.org/ucvAnalysis/US/paper-audits/HowManyToAudit.xls
[17] http://electionarchive.org/ucvAnalysis/US/paper-audits/ElectionIntegrityAudit.pdf