People’s Pundit Daily, the most accurate 2014 election projection model on the Internet, released the first round of grades for the PPD Pollster Scorecard. Before we get into the ratings for the first few pollsters, we need to lay some groundwork for the article.
First and foremost, despite what may appear to be complicated, well-honed methodologies that can only be understood by those gifted in statistics or psephology, polling is more an art than a science. With few exceptions, we consistently find that veteran pollsters with a niche–i.e. polling firms with long track records in a particular state or set of states–perform measurably better due to that experience. That said, we feel it incumbent upon us to express our deep dissatisfaction with the polling industry as a whole.
While we shouldn’t be inclined to pounce on pollsters that get it wrong from time to time, we should also demand an explanation when one is necessary. To be sure, much of the polling during the 2014 midterm election cycle was abysmal, and downright indefensible. We find the utter lack of explanation and transparency post-Election Day highly suspect, and are in agreement that American voters deserved better than the widespread failure to offer meaningful insight into the state of the races. In fact, this is one of two reasons we have decided to release the PPD Pollster Scorecard drip by drip.
We like to think of the PPD Election Projection Model as a hybrid, but it is arguably more a big-picture fundamentals model than anything else. Still, it is undeniable that pollsters–along with a horserace-loving media–have the power to influence. And though it may come as a surprise to some readers, that power is often used for unethical purposes. It’s time to hold them accountable, considering they seem to think they do not owe the American people an explanation, and we find prior attempts to do so insufficient. The second reason for releasing the grades on the PPD Pollster Scorecard, of course, should be obvious–transparency.
About the PPD Pollster Scorecard
Pollsters are generally assigned two grades on the PPD Pollster Scorecard: an overall grade and a second grade based strictly on raw performance. In the past, grading a polling firm on transparency was pretty much a matter of researching whether it is a member of the National Council on Public Polls (NCPP), a signatory to the American Association for Public Opinion Research (AAPOR) Transparency Initiative, or a contributor to the Roper Center data archive. That is par for the course, as you might have seen with other pollster raters.
However, considering the less-than-stellar performance by many of these pollsters in 2014, we now also consider whether they answer our inquiries directly, provide us with detailed data upon request, consistently post results biased one way or the other depending on their sponsors, etc.
The overall grade considers a pollster’s responsiveness, transparency, and methodologies, as well as its ability to get in front of trends (or movement) that other pollsters either miss or are slower to catch. The overall rating also factors in whether we believe a pollster is up to something unethical. That is to say, without beating around the bush, whether we think they’re political hacks and completely full of it.
There are, of course, varying degrees of this behavior. Generally speaking, it can range from suspected bias, to repeated cause for concern, to blatant ethical misconduct. The less severe end of the spectrum typically means the polling results are penalized for being suspect, while the other end means they are no longer worthy of consideration at all.
If we suspect ethical misconduct, it will have a significant negative impact on that particular pollster’s overall grade on the PPD Pollster Scorecard. If we come to such a determination, which is somewhat rare, we will dub that pollster a “political hack,” write about it for everyone to read and discount their polling, altogether.
How might we consider weighing something so serious and potentially damning?
If the PPD average on our ObamaCare Approval Rating Index is -10, and a pollster coincidentally releases a poll a few days before the Supreme Court hears or decides a landmark case on the president’s signature health care law showing Approve +whatever, they might be up to something unethical.
If a candidate says something utterly stupid about women’s bodies naturally aborting pregnancies as a result of rape and virtually every poll confirms a severely damning impact on that candidate’s support, yet a pollster consistently shows him or her still ahead because they may want them to get a particular party’s nomination, they might be up to something unethical.
Predictive Value Grade
The second grade, as previously stated, is assigned strictly on predictive value. That is, did the pollster accurately predict the winner/outcome of the election, and did the results come within the margin of error, typically 3 to 3.5 percentage points or less? It is worth noting that we do weight pollsters when determining the status of races analyzed on the PPD Election Projection Model. We will also be releasing more of the specific data and research associated with each polling firm, as well as other firms, shortly. Now, without further ado, here are the ratings with an expanded explanation for each.
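For readers curious where that 3-to-3.5-point benchmark comes from: it roughly matches the standard 95% confidence interval for a simple random sample of the size most public polls use. The sketch below is only an illustration of that textbook formula (assuming the worst case of a 50/50 split), not PPD’s actual grading methodology.

```python
import math

def margin_of_error(sample_size: int, z_score: float = 1.96) -> float:
    """Worst-case (p = 0.5) sampling margin of error, in percentage
    points, at roughly 95% confidence for a simple random sample."""
    return 100 * z_score * math.sqrt(0.25 / sample_size)

# A poll of ~1,000 respondents lands near the 3-point figure cited above,
# while ~800 respondents lands near 3.5 points.
print(round(margin_of_error(1000), 1))  # ~3.1
print(round(margin_of_error(800), 1))   # ~3.5
```

Real polls carry additional error beyond pure sampling noise (nonresponse, weighting, likely-voter screens), which is one reason raw performance can diverge from the stated margin.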
Let’s start with Selzer & Company, quite frankly the only pollster currently enjoying two A+ grades on the PPD Pollster Scorecard. J. Ann Selzer runs the firm that is the industry gold standard, plain and simple. For those who do not know, the firm is based in West Des Moines, Iowa, and pretty much sticks to what and who it knows best–Iowa and Iowans. In 2008, Selzer was the first to catch the surge from then-Sen. Barack Obama during the Democratic nomination fight and nearly nailed his 7.8% margin over Hillary Clinton to a tee.
“If I were a gambling man and Ms. Selzer’s poll was 20 points off the average spread, I would bet it all on them,” says PPD’s senior political analyst Rich Baris. “Her firm isn’t afraid to release bold polling results that challenge so-called conventional wisdom. They’re transparent, their polling practices are solid and proven, and they get results that match reality.”
Fast-forward to 2014, and they again showed the rest of the industry they aren’t afraid to publish results that may not comport with the average. The final Des Moines Register Poll showed now-Sen. Joni Ernst, R-Iowa, defeating Rep. Bruce Braley, D-Iowa, by 7 points. She beat him by 8.5%, putting Selzer & Company easily within the margin of error and far more accurate than the average, which suggested a 1.8-point nail-biter. Including the 2014 Iowa Senate race, Selzer & Company has an almost immeasurable and certainly statistically insignificant Democratic slant.
The Quinnipiac University Poll, sometimes referred to as the Q-Poll, is similarly worthy of praise for consistently producing reliable polls with significant predictive value. Based out of Connecticut, the pollster conducts minimal national polling, presidential swing-state polling and a handful of other states in which they have a long and proven track record.
“Douglas Schwartz and Tim Malloy run a top-shelf polling operation,” Baris says. “When political pundits were shading now-red states with Democrat incumbents blue, the Q-Poll was firing warning shots nobody heard. Their proven results in 2014 only added to their already solid record of predictive results and transparency. Quinnipiac is everything a pundit wants in a pollster.”
Quinnipiac, with an overall A grade on the PPD Pollster Scorecard, can be counted on to ask respondents probing, in-depth questions that allow us to gain a greater understanding of the electorate’s mood. In both Colorado and Iowa, for instance, early Q-Poll results–which actually showed Democratic incumbents leading–also tipped off the PPD Election Projection Model to vulnerabilities that other forecasters either ignored or completely missed. Q’s slight relative slant toward the GOP is under 1 point and also statistically insignificant.
Relative newcomer Gravis Marketing, the only robocalling firm on today’s PPD Pollster Scorecard with an overall A- grade, came onto the national polling scene in 2012. The Florida-based pollster quickly proved its home-state advantage when it bucked the polling trend and published final results surprisingly favorable to President Obama. Gravis outperformed three of the four final surveys in the Sunshine State, which showed Romney with a 1- to 6-point lead.
“Gravis and other pollsters who robocall respondents have faced considerable skepticism from pundits and other pollsters,” Baris says. “But they have undoubtedly proven their critics wrong, including me. Cherie Bereta Hymel and Co. are transparent, responsive and able to boast more accurate polling results in pivotal races than more-often cited, so-called reliable firms.”
Baris says he has been keeping a particularly close eye on Gravis Polls, which correctly called the North Carolina Senate race between incumbent Sen. Kay Hagan, D-N.C., and now-Sen. Thom Tillis, R-N.C., in their final survey of the 2014 contest. While the Fox Poll, CNN/Opinion Research Poll, and YouGov all gave Hagan the edge in their final surveys, Gravis understated Tillis’ support by just 0.7%.
“Bereta Hymel’s firm caught what turned out to be a real, last-minute shift toward Tillis when no other pollster even came close,” Baris added. “In Arkansas, other competing election models were still favoring incumbent Sen. Mark Pryor over now-Sen. Tom Cotton when PPD pushed back and argued that was wholly unrealistic. Gravis was not only in line with that correct assessment but was also the first to show Cotton breaking the ever-important 50% threshold. More recently, it was slow going for most firms over the summer as far as catching on to Donald Trump’s surge in the crowded Republican field. Not for Gravis.”
With a slight relative slant toward Republican candidates coming in shy of 1.5 points, Gravis enjoys a B predictive value grade on the PPD Pollster Scorecard. Unfortunately, this is where the praise ends and the criticism begins. In the next article, we will explain why Pew Research and Public Policy Polling have a less admirable record.