
Making sense of pre-election polls: 7 practical tips for political journalists

January 12, 2024

Joris Boonen

Political Scientist

2024 promises to be an exceptional election year. Approximately half of the world's population will have the opportunity to vote, including European voters participating in the EU elections in June. In the lead-up to these elections, a series of pre-election polls will, as always, give direction to much of the news coverage. Such polls are now inextricably linked to election campaigns, partly thanks to the rise of data journalism and the availability of user-friendly, interactive polling platforms. But how do you, as a journalist, deal with these polls in a responsible and transparent manner? And which conclusions can you draw from them? This article lists a few practical recommendations for political journalists.

Pollsters have been heavily criticized for 'getting it wrong' in several recent high-profile elections with unanticipated outcomes, including Trump's victory and the Brexit vote in 2016, as well as the more recent victory of Geert Wilders in the Netherlands. This has prompted fundamental questions in newsrooms about the value and relevance of covering pre-election polls. Some newsrooms have even gone so far as to ban pre-election polls from their coverage altogether, although scientific research has shown that polling results have a rather limited, or even absent, direct impact on voting behaviour.


While these criticisms are not new, it is important to delve into their underlying causes and consider how to address them as a journalist. The issue with pre-election polls is often not the result in itself - provided that the poll is well executed and reported with the appropriate nuances and technical background information by the responsible research firm or university. The biggest challenge lies in the interpretation; the larger conclusions we tend to draw from them and the narratives that go along with this. In that sense, journalistic reporting on election polls can also create a political reality in itself, by creating preliminary ‘winner’ and ‘loser’ frames in the lead-up to the elections, which might, in turn, influence the behaviour of political parties and their candidates.

To handle pre-election polls responsibly and to draw fair but newsworthy conclusions from them, we offer seven practical pieces of advice on polls in journalistic reporting.  

1. Embrace uncertainty in words and visualizations

Every poll produces an estimate that comes with its own level of uncertainty. And although we all know this, it is not enough to mention that "we should be careful in interpreting" the numbers, or that it is "just another poll". Such general caveats say little about the level of uncertainty we are actually dealing with. As a journalist, you can also show that uncertainty explicitly, both in words and through visualizations, for instance by avoiding predictions such as "If the EU elections were held today, party X would gain 5 new seats". That phrasing wrongly creates the impression of fixed facts or a true prediction of the electoral outcome. If you want to be transparent about uncertainty, you could instead write something like "according to this new poll among 1,000 respondents, about 18 to 22 percent of voters currently intend to vote for party X".

You can also clarify uncertainty in your data visualizations. Each estimate has a margin of error, which you can show as an error bar above and below every point. This margin of error also helps you as a journalist not to jump to conclusions by interpreting small shifts between two polls as meaningful differences.
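To make this concrete, here is a minimal sketch of how such a margin of error can be computed for a polled percentage, assuming a simple random sample (the figures are hypothetical, and real polling error is often somewhat larger than this textbook formula suggests):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 respondents where 20% intend to vote for party X:
moe = margin_of_error(0.20, 1000)
print(f"20% +/- {moe * 100:.1f} percentage points")  # prints "20% +/- 2.5 percentage points"
```

So the honest way to report that hypothetical result is as a range of roughly 17.5 to 22.5 percent, not as a single fixed number.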

2. Do not present the numbers as more exact than they are

We are programmed to read numbers as exact. Any number suggests a certain degree of precision, and decimal places make that suggestion even stronger. Precisely because we are working with estimates and margins of error, decimal places are generally irrelevant for reporting, so it is better to leave them out altogether. The weather forecast does not predict that it will be 14.35°C tomorrow in your region either. Not just because the forecaster does not know this precisely, but also because they want to make sure you will still trust them tomorrow when it turns out to be only 13°C.

3. Read the survey questions as critically as the outcomes

When interpreting the outcomes of a new poll, you should critically analyse the results, but also question the wording of the questions. As a journalist, you know better than anyone how certain questions can nudge someone in a certain direction. That also happens in political surveys, although more subtly and of course not always deliberately. This type of unintentional steering can come from both the question's wording and the available answer options.


For example, imagine a survey includes the question "Will the Ukraine war affect your vote in the upcoming EU elections?", with the possible answers "Yes, I'll let that weigh heavily", "Yes, I'll let that weigh a little", "No, that doesn't play a role for me", and "I don't know yet". With two Yes's and only one No, the list of answer options subconsciously steers respondents towards a positive answer.

So when you receive the results of any news survey, carefully check if the questions are asked in a neutral manner, and whether the answer options are balanced. It often helps to take a moment to think about how you yourself would respond to the question in order to assess its quality.

4. Investigate the sample: who did and did not participate in the poll?

Every poll asks for the opinion of a subset (or sample) of the entire population. It is therefore very important that the sample is a good reflection of that population. Your chance of obtaining such a representative sample is largest when you draw names at random from the entire population, and when all of those selected people then agree to participate. That almost never happens in polls, so researchers do their best to get as close as possible. Research firms typically work with selected panels: large groups of available volunteers they can draw from at any time, who represent different groups in society. Those respondents are usually paid or rewarded in some way, so it is also important to check who they are and what their incentive is to participate.

In some cases, polls are conducted on an entirely voluntary basis, for instance among the readers of a specific medium. And while size does matter, more respondents do not always equate to a better reflection of the entire population. The motivated readers of a political website are not the same as "the European voters". Not even if there are 100,000 of them in your survey.
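A small simulation can illustrate why a huge self-selected sample loses to a modest random one. All numbers here are hypothetical: assume party X has 20% support in the population, but its supporters are twice as likely to respond to a voluntary reader poll, so they make up 2p / (2p + (1 - p)) = 1/3 of that sample:

```python
import random

random.seed(7)

def poll(n: int, support: float) -> float:
    """Simulate one poll: share of n respondents backing the party."""
    return sum(random.random() < support for _ in range(n)) / n

TRUE_SUPPORT = 0.20  # hypothetical true share of party X voters

# A modest random sample of the whole population:
random_sample = poll(1000, TRUE_SUPPORT)

# A huge self-selected readership sample where party X supporters
# respond twice as often, inflating their share to about 33%:
biased_sample = poll(100_000, 1 / 3)

print(f"random n=1,000:   {random_sample:.1%}")   # lands close to the true 20%
print(f"biased n=100,000: {biased_sample:.1%}")   # lands close to 33%, far off
```

The biased poll is a hundred times larger, yet more respondents only make its wrong answer more precise; they do not make it more representative.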

5. Don’t overstate differences or trends

Differences are interesting, and so are trends. When we see a trend line going up, or an interesting new gap between two groups of voters, we have an immediate tendency to focus on that shift or difference. But small differences in a sample are often simply not statistically significant, because of the margins of error around our estimates.

Changes are more newsworthy than stability. It is part of the ‘election race’ frame in news reporting that we look for the winners, the losers and the interesting shifts. But a small difference in a small sample is not the same as a small difference in a huge population.

An easy way to think about it is as follows: imagine you conduct two polls on a particular candidate or party on the same day, both among 500 respondents, and you see a small difference of about 2 percentage points between them. You would probably consider that difference normal noise, as both results are still within the expected range for that party or candidate. You are working with two small groups of voters, so you will not find the exact same number in both sub-samples, except by coincidence. The same logic applies to two polls conducted a few weeks apart. The chance of finding the exact same result in both is very small, if only because you are working with a new sample. So you have to apply the same level of caution when interpreting the results: is this the regular noise you would expect in a sample, or a statistically significant difference? Quite often, it is the former.
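The same-day thought experiment above can be simulated directly. This sketch assumes a hypothetical party with a fixed true support of 20% and repeatedly draws pairs of 500-respondent polls; differences of a couple of percentage points show up constantly, with no underlying change at all:

```python
import random

random.seed(42)

def poll(true_support: float, n: int) -> float:
    """Simulate one poll: share of n respondents backing the party."""
    return sum(random.random() < true_support for _ in range(n)) / n

# 1,000 pairs of same-day polls, each among 500 respondents,
# with the true support fixed at 20% throughout:
diffs = [abs(poll(0.20, 500) - poll(0.20, 500)) for _ in range(1000)]
within_2pts = sum(d <= 0.02 for d in diffs) / len(diffs)

print(f"{within_2pts:.0%} of poll pairs differ by 2 points or less")
```

Only a bit over half of these pairs agree to within 2 points, so a 2-point "shift" between two real polls of this size is exactly the noise you should expect, not news.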

That is also why a series of polls will give you a better estimate than a single shot, and why 'polling aggregators' come in handy. They combine the available information from different pre-election polls to draw a more nuanced, aggregated picture over time, which gives you a better estimate of possible trends and differences.
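In its simplest form, aggregation is just a weighted average across polls. The sketch below uses hypothetical poll results and weights only by sample size; real aggregators also weight by recency and by each pollster's track record:

```python
# Hypothetical recent polls for party X: (estimated share, sample size)
polls = [(0.21, 1000), (0.18, 500), (0.20, 1500), (0.23, 800)]

# Weight each poll's estimate by its sample size:
total_n = sum(n for _, n in polls)
pooled = sum(share * n for share, n in polls) / total_n

print(f"Aggregated estimate: {pooled:.1%}")  # prints "Aggregated estimate: 20.6%"
```

The pooled figure effectively behaves like one poll of 3,800 respondents, which is why its margin of error is smaller than that of any individual poll in the series.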

6. Look beyond the average

Outcomes of political surveys are often summarized as average scores, such as average trust in the national government. Means are rewarding and intuitive measures that make it easy to compare groups or measurements. But while they are simple to read, they often lack nuance.

An average score can give the impression that we, as a group, share a collective opinion, and our news reporting can reinforce that idea. A shift in that average suggests that we are all moving in a certain direction (e.g. "Trust scores have dropped from 3.9 to 3.3 out of 5, new poll finds"), as if society as a whole is shifting to the left or to the right. But it is often not that simple.

When you write about a shift or change in the mean, also consider the underlying patterns, if these data are available. Where are the real shifts within the group? How widely are responses spread around that mean? Readers might be more interested in this background than you think, and it can add nuance to your story.

7. Be as transparent as possible about the source of the data

News reports on political polls usually link to the full research report or a brief description of the data in a separate box. This is excellent journalistic practice, as it allows a motivated reader to look into the full survey and navigate through the data themselves. However, only a small minority of your most engaged readers will do so.

Where possible, don't just tuck that content away in a separate box or technical paragraph. You could also try to include the story of your dataset and its potential shortcomings in the main article, for instance by telling your readers in your own words how the data were collected. They will also be interested to know whether the results are based on 80 people interviewed on the street, or a random selection of 1,000 citizens who received a personalized invitation in their mailbox. It helps your readers put those numbers in context and ask critical questions about them. Being open about the strengths and weaknesses of the data only makes your piece better. After all, your job as a journalist is not to make the findings of a new poll sharper, better or more exciting than they are. Your job is to put them into context in the clearest possible way.

Hopefully these tips can help you when using polls in your reporting on 2024’s many elections.
