The script has become so familiar that even the schadenfreude has started to wear thin.
In the days leading up to an election, a polling company pushes back hard against anyone who would dare question its latest numbers or the methodology behind them. Then the votes come in, the result is not as predicted, and combative self-assurance is replaced by soul-searching contrition. We sure screwed up this time, so we'll go back to the drawing board to make sure it doesn't happen again!
The latest such episode, in Calgary's mayoral contest, was so extreme as to blow a little life into this tired trope. The pollster most prominently in the field, Mainstreet Research, very confidently gave challenger Bill Smith a double-digit lead over incumbent Naheed Nenshi – causing much confusion among voters given that another poll, with data from Forum Research analyzed and released by academics who had commissioned it, showed Mr. Nenshi 17 points ahead. Both proved off, but Mainstreet especially, as Mr. Nenshi won by just eight points. Cue the mea culpa, with Mainstreet's Quito Maggi acknowledging that of all the "awful" polls, his was the worst, and vowing to "look at our entire process and figure out why."
Anyone who follows such things closely knows it's highly unlikely his company or any other will perfect their methods to ensure this never happens again. There will be campaigns in which most research companies get it right; there will be others in which some are right and some wrong; there will probably be another in which most are left with egg on their faces.
And that might be okay, because polling is an imperfect science. For reasons that tend to be enumerated in post-election think pieces – the decline of home phone lines, disproportionate response rates from different demographics, challenges in predicting who will turn out to vote – it's more difficult than ever. There's no inherent shame in landing on the wrong numbers, as researchers experiment with how to get them right.
Or at least there wouldn't be any shame if all the companies in the field were willing to show humility before results were in – to avoid, say, promising to "single out" discredited critics after their predictions are proven correct, as a Mainstreet spokesperson did apropos Calgary.
Not all pollsters are that brazen, and some have better records than others. But even the most restrained pollsters are usually loath to admit much doubt in their findings – to respond to discrepancies between their polls and others', for instance, by acknowledging they're as interested as anyone else to see who got it right.
Their disinclination to do so has something to do with the hyper-competitive culture of their profession and the strange rivalries that spring up between different pollsters taking swipes at each other on social media. And then there are, somewhat relatedly, commercial considerations at play.
Political polling is not very lucrative in itself. It's often a loss leader, done for publicity to attract corporate clients. Some pollsters try to set themselves apart with the sophistication of their work, as they explore the electorate's underlying trends. But for those hoping to win customers on the basis of landing on the right horse-race numbers, there's an obvious incentive to set that up by drawing attention to their confidence in advance – and a downside to equivocation.
There should also be a downside to expressing false confidence, but to date it's mostly voters who have had to face it, as they're perpetually misled (intentionally or not) about the reliability of the information they're receiving.
It's around this point in the script that calls start being made to do away with polls entirely the next time around – if not by banning them outright, which would mean trampling on freedoms, then simply by tuning them out.
But they've proven too addictive for that, and it's not as though they're without value. Voters can reasonably treat them as a rough guide, particularly when relative competitiveness matters in races with more than two candidates. And there's real worth in the polls that attempt to understand why the electorate moves the way it does.
The more realistic approach, for media and consumers and pollsters alike, is simply to admit what we don't know – which, at this point, includes the accuracy of any one horse-race poll until the votes have been counted. Maybe that will change if the science improves. For now, a little humility would go a long way.