This piece is one of three in a symposium on opinion polling. Chris Hanretty shares his experiences with election forecasting, and explains why he wants to continue doing it, despite missing the mark at two recent UK elections.
Av Chris Hanretty*
A writer on management once quipped that «making mistakes simply means that you are learning faster». In this spirit, I like to think that I’ve learned a lot – and fast – about election forecasting over the past couple of years. I’ve been involved in two reasonably well-known election forecasts in the UK, and in both cases my forecasts left a lot to be desired. In 2015 I ruled out a Conservative majority – but the Conservatives substantially exceeded my expectations, and won a majority. In 2017, I expected the Conservatives to hold on to, or increase, their majority – but instead they contrived to lose it. I’ve therefore both under- and over-estimated the UK’s largest political party. I like to joke that my forecast gets the Conservatives right – but only on average.
Despite these chastening experiences, I’ll likely continue to produce election forecasts. I think there are three main reasons why I continue forecasting.
Why do forecasting?
First of all, it’s fun. I have interacted with a lot more people as a result of my election forecasts than I ever have as a result of my academic articles. When I produce forecasts, I get an immediate reaction, very different to the kind (and pace) of reaction you get from journal publishing. Although it’s generally a bad idea for academics to pursue instant gratification, some attention is nice.
Second, forecasting helps improve my technical skills. Election forecasting is an irredeemably quantitative exercise. Each time I’ve done election forecasting I’ve learned about new quantitative techniques. My 2015 election forecast grew out of work I did with Ben Lauderdale and Nick Vivyan on estimating constituency opinion using multilevel regression and post-stratification (MRP), which is now a major growth area in public opinion research. For my 2017 forecast, I ended up learning a lot about compositional data analysis, which is necessary to ensure that the forecasts for each party always sum neatly to 100%.
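The core idea behind the compositional approach can be sketched with the additive log-ratio (ALR) transform: vote shares are mapped to unconstrained real numbers, modelled there, and mapped back so that the result is always a valid set of shares summing to one. This is a minimal illustration of the general technique, not Hanretty’s actual 2017 model; the party shares used in the example are invented.

```python
import math

def alr(shares):
    """Additive log-ratio transform: map K vote shares (summing to 1)
    to K-1 unconstrained real numbers, using the last party as baseline."""
    return [math.log(s / shares[-1]) for s in shares[:-1]]

def alr_inverse(z):
    """Inverse transform: any real vector maps back to shares that are
    strictly positive and sum exactly to 1 (up to floating point)."""
    exps = [math.exp(v) for v in z] + [1.0]  # baseline has log-ratio 0
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical four-party composition (Con, Lab, LD, Other).
shares = [0.42, 0.40, 0.08, 0.10]
z = alr(shares)

# Forecast adjustments (e.g. projected swings, simulation noise) can be
# applied on the unconstrained scale without breaking the constraint.
z_shifted = [z[0] + 0.10, z[1] - 0.05, z[2] + 0.02]
new_shares = alr_inverse(z_shifted)  # still sums to 1 by construction
```

The design point is that no renormalisation hack is needed after the fact: because uncertainty is modelled on the log-ratio scale, every draw mapped back through `alr_inverse` is automatically a legitimate composition.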
I have emphasised technical progress rather than theoretical progress. My forecasts are based, to a large extent, on opinion polls rather than any supposed «fundamentals». It’s possible to think of forecasts on a continuum from resolutely theoretical to more empirical in nature. The theoretical end is populated by forecasts which rely on a grand theory of vote shares – most commonly some version of the cost-of-governing thesis (i.e. that parties are ‘punished’ with fewer votes when they are in government). Although I find these theories very interesting, I don’t know how interesting the resulting forecasts are. It’s a well-supported empirical generalisation that governing parties lose just over two percent of the vote – but a forecast which simply predicts this doesn’t seem particularly interesting.
Third, forecasting helps in adopting a particular scientific mindset. In forecasting, you must set out a particular method of analysis before new data (the election results) arrive. This is analogous to pre-registration of research designs. Pre-registration helps because it forces us to think harder and more clearly at the early stage of our research, and because it prevents (or makes much harder) any post-hoc changes to our theories made in order to ensure statistically significant or novel results.
There are obvious risks in election forecasting, as I’ve found out. Forecasts based to a large extent on public opinion polling introduce a point of failure over which the forecaster has no or limited control. If forecasting acts as a «shop window» for quantitative political science, then our product might look good or bad not because of what we have done but because of the success of the polling industry. In my experience, however, polling failures have spurred collaboration between polling companies and researchers in an effort to find out what went wrong.
Catering to a demand
I don’t want every political scientist to be an election forecaster. I don’t even want election forecasters to concentrate on that to the exclusion of other areas of research. That would be a monumental misallocation of talent. But I do think there is value in using our skills to produce indications of what is likely to happen. People value this: just look at the amounts of money that broadcast organisations are prepared to pay for exit polls, which allow us to know the likely result just hours before we know the actual result. It’s rewarding (in many different ways) to cater to that demand.
*Chris Hanretty is Professor of Politics at Royal Holloway, University of London