For 30 years, I’ve designed and analyzed public opinion surveys. I’ve done campaign polling for leadership candidates, local candidates, national political parties and media organizations too.
I’ve always loved finding out how people consider their political choices. What I’ve learned has fascinated me. But the things I don’t yet know are even more interesting. To me, excellence in this profession is more about eternal curiosity, less about being convinced that you can predict tomorrow based on what you know about yesterday.
Lately, some in the polling industry have been indulging in an unhealthy, feverish competition to predict the outcome and seat distribution of every election. I think it’s a bit of a fool’s errand.
I’m personally enjoying the fact that the race for Ontario is down to the wire and the outcome is more uncertain than ever.
It’s a great time to remind ourselves that the suspense of a big unknown is more interesting than endless over-confident predictions about the chemistry of turnout rates and their implications for a handful of swing ridings.
It would be refreshing to hear more pollsters say, honestly, “impossible to tell” when asked how this election will turn out. “I don’t know” isn’t a mark of shame, and sometimes it can be a badge of honour.
Pollsters aren’t completely to blame for the syndrome that’s developed. Nor are news organizations. But together, the relationship has been mutually destructive. The search for edgy stories and declarative headlines creates a not-very-subtle invitation to pollsters to put more black and white into their storytelling, despite the fact that often, shades of grey are the hallmark of Canadian political opinion.
This is not to say that little has been learned from the polls that have emerged in this election. But the best value lies in the big picture, the context and the general reactions to parties, leaders and ideas.
Reducing the value of polls to a test of whether they predicted voting choice among several parties to within a couple of percentage points is to overlook much of the bigger-picture information that is both accurate and relevant.
We know, for example, that the economy is far from the only issue that matters, and that the feeling that public trust was betrayed by the McGuinty government is a big influence on the mood of voters.
We’ve learned that there is widespread hesitation about the idea of Tim Hudak as premier, because of concerns that his economic prescriptions might backfire, that his fiscal priority will erode education or health services, or that his conservative ideology runs too deep for a province of progressives and pragmatists.
We’ve seen evidence that the fatigue and frustration with the Liberals are combined with a fragile hope that Kathleen Wynne can represent a fresh, centrist, more ethical and fiscally prudent start. Many voters want to believe, but fear being let down again.
We see numbers that explain the big challenge for the NDP and Andrea Horwath – doubts about their credibility as economic managers among those they need to win over, combined with fears among some of their base that an NDP vote could lead to a second Common Sense Revolution.
Polling has amply and credibly described the outlines of this campaign and the feelings that voters have about these choices.
That it can’t tell us exactly what will happen when voters enter the polling stations on Thursday is not a failure; it’s a useful reminder that the chemistry of elections is complex, susceptible to change and unpredictable.
On Friday morning, I’m sure we’ll see a replay of the public crowing by pollsters whose numbers were closest to the mark and the crow-eating of those who were furthest from the outcome.
At that moment, there’ll be little room for the truth that those in the polling business know full well: for most of the better firms in the business, being closest is more about luck than science. And with the long-term decline in response rates, that level of precision is getting even more elusive, no matter how advanced the data collection technologies are, or how sophisticated the “weighting” procedures.
But lest this column leave the impression that I think all polls are equally competently done, that’s certainly not the case.
Recently, we’ve seen a startling tendency on the part of some news organizations to publish numbers with too little diligence about the methods used to gather them, and too little care about creating false impressions of what’s really happening.
Wild swings are reported without any pause to wonder if they are really plausible. Minute changes are described as evidence of something fundamental and newsworthy having happened, and then a couple of weeks later the “shift” reverses itself, with no acknowledgement that the original analysis might have been completely overstated.
Wishing that polls would go away and leave voters alone is a more and more common refrain in our democracy – and there are some pretty good reasons why that has become the case. But the desire to know what others think is insatiable, and so the demand will not disappear. Our best hope is to strive to use polling data more prudently – which includes occasionally relishing, not despairing about, the things that are truly impossible to predict.