Any comparative forensic analysis is only as “good” as its baselines. In Landslide Denied[1]—our archetypal post-election comparative forensics study, in which the “red shift” (the rightward disparity between exit poll and votecount results) was identified and measured—a critical component of the analysis was to establish that the exit poll respondents accurately represented the electorate. We employed a meta-analysis of multiple measures of the demographics and political leanings of the electorate to demonstrate that the exit polls in question had not “oversampled” or over-represented Democratic or left-leaning voters (in fact any inaccuracy turned out to be in the opposite direction), and therefore that those polls constituted a valid baseline against which to measure the red-shifted votecounts. In Fingerprints Of Election Theft,[2] we went further and removed all issues of sample bias from the equation by conducting a separate poll in which we asked the same set of respondents how they had voted in at least one competitive and one noncompetitive contest on their ballot. The noncompetitive contests, being presumptively unsuitable targets for rigging, thus served as the baselines for the competitive contests, and the relative disparities could be compared without concern about any net partisan tendencies of the respondent group.
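To make the logic of that within-respondent design concrete, the sketch below (a minimal illustration with invented numbers, not data from the actual study) shows why any net partisan lean of the respondent pool drops out: the lean affects the competitive and noncompetitive contests alike, so differencing the two disparities isolates the excess disparity in the contest worth rigging.

```python
# Hypothetical margins, in percentage points (Dem% - Rep%).
# The same respondents report their votes in both contests.
poll_margin_noncompetitive     = 28.0   # respondents' reported votes
official_margin_noncompetitive = 25.0   # official votecount

poll_margin_competitive     = 7.0
official_margin_competitive = 1.0

# Disparity = poll margin minus official margin
# (positive = official result redder than the respondents' reports).
baseline_disparity    = poll_margin_noncompetitive - official_margin_noncompetitive  # 3.0
competitive_disparity = poll_margin_competitive - official_margin_competitive        # 6.0

# The noncompetitive contest absorbs whatever net lean the respondent
# group has; what remains is the excess disparity in the competitive
# (presumptively riggable) contest.
excess_disparity = competitive_disparity - baseline_disparity   # 3.0 points
print(excess_disparity)
```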
More recently we have commented on the feedback loop that develops between election results and polling/sampling methodologies, such that consistently and unidirectionally shifted votecounts trigger, in both pre-election and exit polls, methodological adaptations that mirror those shifts.[3] Approaching E2014, we observed two such distortions: the near-universal use of the Likely Voter Cutoff Model (LVCM) in pre-election polling, and the stratification of both pre-election and exit polls to demographic and partisanship weights derived from prior-election exit polls that had themselves been adjusted rightward. Together these pushed the polls significantly to the right, corroding our baselines and making forensic analysis much less likely to detect rightward shifts in the votecounts.
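The mechanics of the second distortion can be seen in a toy reweighting example (a sketch only, assuming made-up party-ID shares and vote preferences, not the actual targets any pollster used): weighting a current sample to party-ID proportions taken from a prior exit poll that was adjusted rightward drags the current poll's result rightward with it.

```python
# Toy example of stratifying/weighting a poll to prior-election targets.
# All shares and preferences are invented for illustration only.

# Raw sample composition (fractions of respondents) and the Democratic
# vote rate within each party-ID group.
sample_share = {"Dem": 0.38, "Rep": 0.32, "Ind": 0.30}
dem_support  = {"Dem": 0.92, "Rep": 0.07, "Ind": 0.50}

# Weighting targets derived from a prior exit poll that was adjusted
# rightward to match that election's votecount.
adjusted_targets = {"Dem": 0.34, "Rep": 0.36, "Ind": 0.30}

def dem_share(composition):
    """Projected Democratic share of the vote under a given composition."""
    return sum(composition[g] * dem_support[g] for g in composition)

raw      = dem_share(sample_share)      # ~0.522
weighted = dem_share(adjusted_targets)  # ~0.488

# The reweighted poll sits about 3.4 points to the right of the raw
# sample before a single vote is counted, "pre-covering" a comparable
# rightward shift in the votecount.
print(round(raw - weighted, 3))
```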
Indeed, given the rightward distortions of the adaptive polling methodologies, we noted that accurate polls in E2014 would serve as a red-flag signal of rightward manipulation of the votecounts. In effect, the LVCM and the adjusted-exit-poll-derived weightings constituted a rightward “pre-adjustment” of the polls, such that any rightward votecount manipulations of comparable magnitude would be “covered.”
It is against this backdrop that we present the E2014 polling and votecount data, recognizing that the adaptive polling methodologies that right-skewed our baselines would combine to reduce the magnitude of any red shift we measured and to mask much of the footprint of votecount manipulation in this election.
The tables that follow compare polling and votecount results, where polling data were available, for US Senate, gubernatorial, and US House elections. The exit polling numbers represent the first publicly posted values, prior to completion of the “adjustment” process, in the course of which the poll results are forced into congruence with the votecounts.[4] The “red shift” represents the disparity between the votecount and exit poll margins. For this purpose, a margin is positive when the Democratic candidate’s total exceeds that of the Republican candidate. To calculate the red shift we subtract the votecount margin from the exit poll margin, so a positive red shift number represents a “red,” or rightward, shift between the exit poll and votecount results.
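For readers who prefer the arithmetic spelled out, the short sketch below (with purely illustrative numbers, not figures from any actual race) shows the margin and red-shift calculation exactly as just defined.

```python
def margin(dem_pct, rep_pct):
    """Margin in percentage points; positive when the Democrat leads."""
    return dem_pct - rep_pct

def red_shift(exit_poll_margin, votecount_margin):
    """Red shift = exit poll margin minus votecount margin.

    A positive value means the votecount came out redder (more
    Republican) than the exit poll.
    """
    return exit_poll_margin - votecount_margin

# Illustrative (hypothetical) contest:
# exit poll: D 52%, R 48%  ->  margin = +4
# votecount: D 49%, R 51%  ->  margin = -2
ep_margin = margin(52.0, 48.0)          # +4.0
vc_margin = margin(49.0, 51.0)          # -2.0
print(red_shift(ep_margin, vc_margin))  # 6.0, i.e. a 6-point red shift
```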