This piece is one of three in a symposium on opinion polls. Tom Louwerse shares four lessons on how to use opinion polls, based on his experiences from the Netherlands.
By Tom Louwerse*
Media and voters love opinion polls. They are quick snapshots of where parties stand, which allow journalists to portray politics as a horse race: who is winning, who is falling behind? As a political scientist, I have what one might call a love/hate relationship with the polls. Polls are great tools for measuring public opinion. By interviewing just 1,000 randomly selected people, you get a fair impression of what voters want. Unfortunately, polls are often misused: coverage frequently focuses on small changes in party support that might as well be random noise. Yes, I love polls. But I also love my toddler son, and I don’t expect him to play the piano and bake me a cake. We should have realistic expectations of polls, too.
To help promote better use of opinion polls, I started pooling opinion polls in the Netherlands (‘Peilingwijzer’) about five years ago, and I also ran a similar project in Ireland, where I worked for a few years. This was based on excellent methods first developed by political scientists in other countries, most notably Simon Jackman, who did something similar for Australia. These are my four lessons on how to use opinion polls, based on that experience.
- Forget small changes, focus on the trends
All too often, people get extremely excited about poll results that show one party up by two points and another down by one. Journalists bend over backwards to find explanations for these changes in the parties’ fortunes. Small changes like these are, however, usually well within the ‘margin of error’ of an opinion poll. If popular support for a party had in fact not changed since the last poll, we might still very well observe a small loss or gain for that party. That’s because opinion polls are based on responses from, typically, 1,000–2,000 people. Just due to chance, we might have a slightly higher percentage of Labour Party voters in the first poll compared to the second. So, forget small changes.
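To make the ‘margin of error’ concrete, here is a minimal sketch of the standard 95% margin-of-error calculation for a proportion in a simple random sample. The party share and sample size are illustrative numbers, not figures from the article:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p in a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A party polling at 30% in a sample of 1,000 respondents:
moe = margin_of_error(0.30, 1000)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 2.8 points
```

With a margin of roughly three points either way, a two-point swing between two polls is entirely consistent with no real change in support.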
We can, however, focus on the trends in opinion polls. If multiple pollsters show a similar trend for a party, we can be reasonably sure that this is a true reflection of changes in public opinion. That’s why aggregating opinion polls makes sense: each poll is a bit of information about the same underlying quantity, party support. By combining all available information, one can reduce the ‘noise’ inherent in the polls and focus on the trends.
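The intuition behind pooling can be sketched with a crude sample-size-weighted average. This is not the Peilingwijzer model (which is considerably more sophisticated), just an illustration with made-up polls of why combining polls shrinks the noise:

```python
import math

# Hypothetical recent polls for one party: (share, sample size)
polls = [(0.29, 1000), (0.31, 1500), (0.30, 1200), (0.28, 1000)]

# Sample-size-weighted average as a crude pooled estimate
total_n = sum(n for _, n in polls)
pooled = sum(p * n for p, n in polls) / total_n

# The margin of error shrinks with the combined sample size
moe_single = 1.96 * math.sqrt(pooled * (1 - pooled) / 1000)
moe_pooled = 1.96 * math.sqrt(pooled * (1 - pooled) / total_n)
print(f"pooled estimate: {pooled:.3f}")
print(f"single-poll MoE: +/-{moe_single * 100:.1f} pts, pooled: +/-{moe_pooled * 100:.1f} pts")
```

Four polls of 1,000–1,500 respondents behave, in the best case, like one poll of 4,700, roughly halving the margin of error. In practice house effects and differing methodologies mean real pooling models have to do more work than this.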
- It matters who conducts a poll
Differences between opinion polls are not just due to random error, however. Pollsters each use their own methodology to correct for sampling issues, such as non-response (people who don’t want to participate in a poll). This results in their polls being consistently different. One example is displayed below, concerning the largest Dutch party, the conservative-liberal VVD. The dots are individual polls, colour-coded by pollster. It is easy to see that the blue dots are consistently higher than the red ones, meaning that Ipsos (the blue pollster) had consistently higher estimates for the VVD than Peil.nl (the red pollster). You should take these ‘house effects’ into account when comparing polls done by different polling companies.
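A simple way to think about a house effect is as a pollster’s average deviation from the cross-pollster mean over the same period. The sketch below uses invented numbers (not the actual VVD figures from the chart) to show the idea:

```python
from statistics import mean

# Hypothetical VVD vote shares reported by two pollsters over the same weeks
polls = {
    "Ipsos":   [0.27, 0.28, 0.27, 0.29],
    "Peil.nl": [0.23, 0.24, 0.23, 0.24],
}

# Grand mean across all polls, then each pollster's average deviation from it
overall = mean(v for series in polls.values() for v in series)
house_effects = {name: mean(series) - overall for name, series in polls.items()}
for name, effect in house_effects.items():
    print(f"{name}: {effect * 100:+.1f} points relative to the cross-pollster average")
```

Note that a house effect only tells you that pollsters differ systematically, not which one is right; proper pooling models estimate these offsets jointly with the underlying trend.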
- Opinion polls are not designed to predict election outcomes precisely
If polls were exact predictions of elections, countries would probably use them instead of elections. But they are not. One reason is that polls are measurements of voting intentions at a given moment in time. People’s preferences might change. Thus, polls taken months before the elections might accurately reflect people’s intentions at that time, but campaigns or events can change their minds.
This is, however, not the only problem. In recent years, we have seen a number of large polling misses, most notably in the United Kingdom in 2015. It is quite difficult to conduct a good poll, quickly and at low cost. Response rates in telephone polls are low – with some groups happier to participate than others, resulting in biases in the poll. And non-random sampling used for internet polls presents companies with new challenges.
- Note to self: it’s hard to change patterns of behaviour
My ‘pooling of the polls’ in the Netherlands has been quite successful in attracting public attention. The public news broadcaster has picked it up and now uses the Peilingwijzer exclusively for its reports on party opinion polls. Other news outlets have also said they would be more careful in reporting on opinion polls during the election campaign earlier this year.
The positive news is that, for the most part, they have. At least those newspapers and broadcasters that intended to be more careful have indeed shown restraint. But the lure of a quick, sexy headline is always present, especially when it concerns polls from other countries or opinion polls about other topics. When it is not a Dutch party poll, some Dutch journalists seem to forget everything they learned about error margins, sample sizes and house effects.
In that sense, responsible poll reporting requires constant work. Political scientists can help by providing polling tools and explaining what they do (and don’t) say in clear, accessible terms. Political journalists should maintain at least a basic working knowledge of polling, so that they understand what they are reporting on. And the public might take a bit more interest in the substance of election campaigns rather than the ‘horse race’. This will help to put opinion polling in its proper place.
*Tom Louwerse is an associate professor of political science at Leiden University, the Netherlands