Saturday, November 21, 2009

NY CD-23: Questions Remain About "Pilot" Federal Election

We have written previously about New York's reckless rollout of uncertified voting systems in real elections. Letters from good-government groups to the US Dept. of Justice, the NY State Attorney General and the NY State Board of Elections have also expressed these concerns. The only response from these officials has been to stay the course.

Experts tell us that testing and certification of computerized voting systems can never guarantee that the outcome of an election will be correct. Even so, lingering questions remain about the uncertified machines used in New York's 2009 CD-23 special election, and about the procedures purported to reduce the chances of wrongly declared winners.

An article quoting an election official who claimed there was a "virus" in the voting system has been criticized for misusing that term. While the critics may technically be correct -- a bug discovered in the software is not necessarily a virus -- they also seem unaware of the history of the machines in question.

That history involves the use of uncertified software distributed by Sequoia Voting Systems to third-party vendors such as ballot-printing shops. This so-called "Bridge Tool" program allows such vendors, rather than bi-partisan county election officials, to configure uncertified voting machines (or for that matter, certified ones).

Last year -- and this year in counties that did not participate in the pilot -- the Bridge Tool method was used to provide accessible voting by outsourcing the configuration of ballot marking devices (BMDs) used by voters with special needs. While a small number of accessible paper ballots could easily be hand-counted to ensure that these votes are counted as cast, there is a broader problem: the BMDs are part of the same optical scan systems being used to count thousands of votes in CD-23 this year -- the Sequoia/Dominion ImageCast ballot scanners.

Today we received reports that BMDs configured using the Bridge Tool incorrectly printed what was supposed to be a two-sided ballot. In one county, out of 58 ballots printed, one had the races on the front duplicated on the back, omitting the proposals that should have appeared there. On another ballot, only the front side printed, again omitting the proposals on the back. On a third, both the front and back printed, but the races from the front were reprinted over the proposals on the back. Because this small number of ballots was counted by hand, these errors were detected. But consider how such problems might manifest if these ballots, or thousands of pre-printed ballots, were scanned and counted by the same unreliable machines -- the Sequoia/Dominion ImageCast ballot scanners.

No one knows exactly what data was transferred to the scanners using the Bridge Tool and removable memory cards last year, or if some of that data remains on the memory cards to this day. New York has no procedure to independently inspect the contents of scanner memory cards, a service performed by the University of Connecticut at the request of their Secretary of the State. New York has not adopted the rigorous procedures reportedly followed in the State of California to attempt to ensure that its voting machines and Election Management System PCs used to configure them are free of malware.

So the following unanswered questions remain about the electronic vote counts and the voting system used in CD-23 and elsewhere in New York:

Election Security Concerns

1. Was the uncertified Sequoia Bridge Tool program used by any third-party vendors to program any CD-23 ImageCast machines in past elections or the current one? If so, what was the method by which the ballot definition files were transferred to the ImageCast machines and how do we know this did not deliver malware to the scanners? (It's been claimed that because the scanners are Linux machines, it's unlikely that a "wild virus" was introduced. But this does not rule out malicious configuration files. Also, note that the Election Management System PCs that configure the Linux scanners run Microsoft Windows -- NOT Linux.)

2. Are there any internal USB ports in the ImageCast scanners, besides the one the SBoE says is used only for the printer?

Election Integrity Concerns

1. If the problem was caught by a pre-election logic and accuracy test as claimed by the State Board of Elections, then why wasn't the problem caught on every machine where it existed? The SBoE has said that not all machines with multi-winner races were identified, but all machines were supposed to be tested. This means that the tests may not have been run as required; or the tests may have failed to detect the problem in all cases; or the test results may have been ignored. (These are not mutually exclusive.)

2. Why did it take so long for the reported bug to be discovered? New York is supposed to have the most rigorous certification process in the nation -- yet these machines can't even support a simple "Vote-for-2", "Vote-for-3", etc. contest on the ballot. They crash.

3. Were all the relevant election officials informed about the discovery of the problem? If so, when?

4. Why wasn't this problem widely publicized before the election so that voters and candidates -- and not just election officials, vendors and other insiders -- could have known about it?

5. What exactly was changed in the ballot programming (which is not the source code) to serve as a workaround for a reported bug in the source code? How was this done without preventing voters from voting for as many candidates as they were entitled to vote for (a violation of NY's Election Law and Constitution), or allowing voters to overvote without notifying them (a violation of State and Federal Law (HAVA))?

6. Were all emergency ballots counted at the polls on election night, or were they removed from public view and counted later?

7. Will there be a full hand count; a hand count of all the ballots cast on the machines that had the problems; a 3% hand count; or some other hand count based on the grossly inadequate Part 6210.18 audit Regulations?

8. If not a 100% hand count, will all the machines selected to be hand counted be chosen randomly with respect to the entire set of machines that counted the CD-23 race in each county, or will the machines be chosen because they are needed to audit other contests as provided for in the 6210.18 Regulations? (These regulations are written so as to require a great deal of non-random selections of machines with respect to an individual contest. This not only makes a lot of busy-work for the counties, but undermines the effectiveness of the random audit.)
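The distinction raised in question 8 -- every machine getting an equal chance of audit versus machines being chosen because other contests need them -- can be sketched in a few lines of code. This is only an illustration of uniform random selection; the machine names, county size, and 3% rate below are hypothetical, not actual CD-23 data or the 6210.18 procedure.

```python
import random

def select_audit_machines(machine_ids, rate=0.03, seed=None):
    """Uniformly sample a fraction of machines for a hand-count audit.

    Every machine that counted the contest has an equal chance of
    selection; rounding up guarantees at least one machine is audited.
    """
    rng = random.Random(seed)
    count = max(1, round(len(machine_ids) * rate))
    return sorted(rng.sample(machine_ids, count))

# Hypothetical example: 200 scanners that counted the CD-23 race in one county.
machines = [f"scanner-{i:03d}" for i in range(200)]
audited = select_audit_machines(machines, rate=0.03, seed=2009)
print(audited)  # 6 machines, each drawn with equal probability
```

By contrast, a selection driven by the need to audit other contests does not give every machine in the CD-23 race an equal chance of being examined, which is precisely why such non-random choices weaken the statistical power of the audit.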

Eddie Ajamian contributed to this article.

For more coverage of this story see The Brad Blog,
Bo Lipari's blog item and Teresa Hommel's response [PDF].
