
With every election – and increasingly between elections – new polls come out, purporting to show what percentage of the population say they support which candidates. Polls can be a useful tool to gauge how politicians and issues are faring in the public eye, but they aren't without their downsides. Can you trust poll numbers? And when and how should you take results with a grain of salt?

Here's a guide to understanding how polls are done and how to interpret their results.


Why do we poll people about their political preferences?

Polls serve a purpose. They allow the public to 'speak truth to power', putting the lie to political spin and keeping our political leaders accountable. Polls can be useful pieces of information that might help voters decide how to cast their ballots. As part of deciding how to vote in an election, we canvass our friends and family, we listen to talk radio, or we read opinion pieces in the newspaper to get an idea of how a race is shaping up. Polls are no different – just more scientific.

As well, people are interested. Few political stories get as much traction as the results of the latest poll. People like to know the state of a political race and who might win, much as fans seek out spoilers for an upcoming comic book movie.

Newsrooms, advocacy groups, and political parties commission pollsters to conduct surveys for them, with the cost ranging from a few thousand to a few tens of thousands of dollars, depending on the type of poll.

Increasingly, though, polling companies conduct polls and hand them to the media for free, or publish them directly on their websites. This is done to promote the company’s services, as pollsters generally make the vast majority of their revenues from market research, rather than political polls.


What are respondents asked when they’re polled?

That depends on the survey. If it is a poll about voting intentions, the first question to be asked is usually whether you are eligible to vote. There’s no reason to poll people who can’t cast a ballot.

A properly constructed poll will often get right to the most important questions in order to prevent bias from seeping in. If an election were held today, which party would you vote for? Some polls allow you to name the party yourself, but most provide a list to choose from, with the order of the parties randomized with each call.

Other questions might revolve around approval ratings or specific political topics of interest. A poll then usually ends with demographic questions about age, gender, income, and education. These answers are used to calibrate the poll – weighting the responses so that the sample reflects the general population.
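To give a sense of how that calibration works, here is a minimal sketch of one common approach – weighting respondents so that the sample's demographic mix matches the population's. The age categories and all the percentages below are invented for illustration, not real census figures.

```python
# A minimal sketch of demographic weighting, sometimes called
# post-stratification. All figures are invented for illustration.

# Share of each age group in the target population (e.g. from a census).
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Share of each age group among the poll's actual respondents.
sample_shares = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}

# Each respondent is weighted by how under- or over-represented
# their group is in the sample.
weights = {group: population_shares[group] / sample_shares[group]
           for group in population_shares}

print(weights)
# {'18-34': 1.5, '35-54': 1.0, '55+': 0.777...}
# Younger respondents count for more because too few were reached;
# older respondents count for less because too many were.
```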

How do pollsters reach respondents?

Most pollsters today rely on one of three methods.

One is the oldest method still in use: the telephone. Numbers are randomly dialled within a given area code, and a live interviewer is on the other end of the line to conduct the poll. These interviewers follow a precise script to ensure that every call is conducted in the same manner. If your number was dialled but you weren't home, a proper poll will try several times to reach you over the following hours or days. Otherwise, the sample might be skewed by interviewing only people who were at home at 7 p.m. on a Tuesday night – and those people may be different from those who are home on Wednesdays.

Another method also uses the telephone, but instead of a live interviewer the call is automated. These are known as ‘robo-polls’. A recorded script is played, and respondents are asked to punch in their responses using the telephone keypad. Press 1 for Conservative, press 2 for Liberal, etc. The advantage of this method is that many calls can be conducted quickly and cheaply – you don’t have to train and pay a team of interviewers to get the results.

The third method, which is becoming increasingly common, is the internet. Polling firms assemble a panel of internet users, often numbering in the hundreds of thousands. These panelists can be recruited in various ways, including through internet advertisements and over the telephone.

Once the panel is built, pollsters survey its members, ensuring that those who complete the survey are broadly reflective of the target population.


How much error is there in the numbers?

There are a number of sources of error, the most important being sampling error. No matter how perfectly a poll is conducted, there will always be a degree of error associated with sampling a small portion of a large population. This is reported as the 'margin of error': in a standard poll of 1,000 people, that margin of error is plus or minus 3.1 per cent, 19 times out of 20. That means that if the poll had been conducted in exactly the same way 20 times, in 19 of those cases the results would be within 3.1 points of actual public opinion.
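The 3.1-per-cent figure is not arbitrary: for a simple random sample, it follows from a standard statistical formula. Here is a quick check, assuming the usual worst case where opinion is split 50/50.

```python
import math

# 95% margin of error for a simple random sample of size n:
#   moe = z * sqrt(p * (1 - p) / n)
# z = 1.96 is the 95%-confidence multiplier; p = 0.5 is the worst case.
z, p, n = 1.96, 0.5, 1000

moe = z * math.sqrt(p * (1 - p) / n)
print(f"plus or minus {moe:.1%}")  # plus or minus 3.1%
```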

This assumes, however, that the sample was drawn randomly and that everyone in the target population has an equal chance of being interviewed. This is why telephone polling can still carry a margin of error – virtually everyone has a telephone, be it a landline or mobile phone (and yes, most pollsters do sample cell phones). But not everyone will respond. Response rates have dropped to 10 per cent or less, from roughly 1-in-3 in the past. This might have an important effect on the accuracy of a poll, though there is some debate in the industry about whether or not this effect is significant.

Internet polls, because they survey members of an opt-in panel rather than a random sample of the population, are not supposed to carry margins of error, at least according to the main industry bodies in Canada and the United States. There are still errors associated with these polls, but they are not measured in the same way as in a randomized telephone poll. Nevertheless, internet polls are designed to be as accurate as their telephone counterparts, and so should be expected to perform as well. And they usually do.

But how can 1,000 people reflect the opinions of a country with a population of 35 million?

It might be hard to believe, but it is mathematically possible. A smaller sample will, of course, have a harder time reflecting the population accurately. But a poll of 1,000 people is generally considered the standard size. Larger polls have smaller margins of error, but the return on that extra effort is smaller as well: because the margin of error shrinks with the square root of the sample size, doubling the sample does not cut the margin of error in half.
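A quick calculation with the same worst-case formula as above shows the diminishing returns.

```python
import math

# Worst-case 95% margin of error (opinion split 50/50) for growing samples.
for n in (1000, 2000, 4000):
    moe = 1.96 * math.sqrt(0.25 / n)
    print(f"n = {n}: plus or minus {moe:.1%}")

# n = 1000: plus or minus 3.1%
# n = 2000: plus or minus 2.2%
# n = 4000: plus or minus 1.5%
# The sample must quadruple to cut the margin of error in half,
# because the margin shrinks with the square root of n.
```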

One common way to explain how sampling works is to imagine a pot of soup. The pot of soup contains a large number of different ingredients mixed together. That is our target population. If you dip a spoon into the pot and taste it, the spoonful will likely taste like the rest of the pot of soup. That’s our poll.

You don't need to eat the entire pot to know what it tastes like. The odds of getting a spoonful that is completely unrepresentative of the entire pot of soup are low – and it is the same with polling samples. If the pot of soup has been mixed together properly (or if a sample is collected randomly), a small sample of it should be reflective of the entire pot (or the entire population).
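The soup analogy can also be tested directly. The toy simulation below draws repeated 'spoonfuls' of 1,000 respondents from a population where exactly 40 per cent hold some opinion – a figure invented for the example. With 35 million people in the pot, removing one respondent barely changes the mix, so each draw behaves like an independent taste of the same soup.

```python
import random

random.seed(1)  # fixed seed so the run is repeatable

true_share = 0.40   # the 'flavour' of the whole pot (invented)
n = 1000            # spoonful size
trials = 1000       # number of separate spoonfuls to taste

hits = 0
for _ in range(trials):
    # Each respondent is an independent draw from the population;
    # with 35 million people, this is effectively what random sampling does.
    supporters = sum(random.random() < true_share for _ in range(n))
    estimate = supporters / n
    if abs(estimate - true_share) <= 0.031:  # within the reported margin
        hits += 1

print(f"{hits / trials:.0%} of spoonfuls landed within 3.1 points")
# Typically about 95% -- the '19 times out of 20' from earlier.
```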

[Photo: B.C. Premier Christy Clark on election night, May 14, 2013. (The Canadian Press)]

How accurate have polls been in predicting election results?

They can do very well, and in most elections they reflect the outcome quite closely. There have been some notable misses, of course, such as the provincial elections in Alberta in 2012 and British Columbia in 2013. But while the polls might have missed those outcomes, they correctly picked the winner in the most recent provincial elections in Saskatchewan, Manitoba, Ontario, Quebec, New Brunswick, Nova Scotia, Prince Edward Island, and Newfoundland and Labrador, as well as in the most recent federal election. Cases like Alberta and British Columbia are so notorious because they are rare.

And there were some reasons for those misses. In Alberta, most pollsters stopped polling almost a week before the election, and so may have missed a late shift in voting intentions. In British Columbia, there was great difficulty in modelling the voting population.

There are degrees of accuracy as well. The polls in the last provincial election in Ontario were mixed, though most had the Liberals winning – just not necessarily with a majority. In the 2011 federal election, every poll at the end of the campaign put the Conservatives in first by a comfortable margin and the NDP in second (and about to make a huge breakthrough in Quebec). No poll gave a strong indication, however, that the Conservatives would win a majority government.

In other cases, however, the polls can do remarkably well. This was the case in the last elections in Nova Scotia and Quebec.

It is impossible to know beforehand which elections will be polled well and which will not. Often there are reasons intrinsic to each campaign that can help or hinder the accuracy of the polls. But misses like Alberta and B.C. are very rare – only marginal errors should be expected in most cases.

Who are the polling firms that work in Canada?

Canada has a limited number of national firms, as well as some firms that do regional polling.

Pollsters that conduct national polls include Abacus Data, Angus Reid Global, EKOS Research, Forum Research, Harris-Decima (now part of Nielsen), Ipsos Reid, Léger and Nanos Research. Occasionally, other polling firms release national numbers as well.

At the regional level, there are, among others, Insights West and the Mustel Group in British Columbia, ThinkHQ in Alberta, Insightrix in Saskatchewan, Probe Research in Manitoba, CROP in Quebec, and Corporate Research Associates in Atlantic Canada.


Produced by digital politics editor Chris Hannay. Graphic by Murat Yukselir.