
Be wary of the psephology: Why opinion polls should be taken with a pinch of salt


In the US, one opinion poll after another suggests that Joe Biden has consolidated his lead over Donald Trump ahead of the coming presidential election.

Psephologist Nate Silver, whose model takes into account various state and national opinion polls plus other data, declared in mid-October that Trump has only a 1-in-7 to 1-in-8 chance of winning the presidential election. The Economist gives Trump a 1-in-10 chance of winning.

Can Biden afford to lower his guard and relax? Only at his own peril. In 2016, most opinion polls favoured Hillary Clinton, then the Democratic presidential nominee. That race was much closer. A week before the election, Silver gave Clinton a two-thirds chance of winning — and this was among the most conservative predictions. Others gave Clinton as high as an 87% chance of winning.

She grew so confident that she neglected the three large but closely contested battleground states — Pennsylvania, Wisconsin and Michigan — in the last phase of her campaign, and lost each of them to Trump by a small margin. She instead campaigned in Arizona, hoping to help the Democrats win the House as well as the presidency, and ended up losing both.

Technically, the forecasters could claim they were not wrong: they had given Trump a small, but non-zero, chance of winning. In their defence, they also point out that Clinton won three million more popular votes than Trump. True, but that is a weak excuse.

Everyone knows that it is votes in the electoral college, not the popular vote, that determine the US presidency. Experts acknowledge that in 2016, three factors caused the opinion polls to diverge from the actual electoral outcome: undecided voters, who ended up breaking for Trump more than for Clinton; a flawed sampling frame, which gave insufficient weight to less-educated voters, who backed Trump in larger numbers; and a significant proportion of voters who did not respond to pollsters at all.

This time, pollsters have changed their sampling designs to increase the representation of less-educated voters. But there is little they can do about undecided voters and non-responders. If a poll has a margin of error of 3% on each candidate's share, the margin between the two candidates can shift by as much as six points.

So, a polled share of 52% means the candidate could end up with anything from a winning 55% to a losing 49%. However, Biden's average lead across polls is over 10 points, which implies victory even after accounting for the margin of error. So, what can go wrong?
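The arithmetic above can be sketched in a few lines. This is a minimal illustration, not any pollster's actual method: it simply assumes a symmetric margin of error applies to each candidate's polled share, so the gap between them can move by twice that margin.

```python
# Illustrative sketch: a +/- 3-point margin of error applies to EACH
# candidate's polled share, so the gap between two candidates can
# shift by up to 6 points in either direction.
def gap_range(share_a, share_b, moe=3.0):
    """Return the (min, max) possible gap (share_a - share_b),
    given a symmetric margin of error on each candidate's share."""
    worst = (share_a - moe) - (share_b + moe)
    best = (share_a + moe) - (share_b - moe)
    return worst, best

# A candidate polling 52% vs 48% holds a nominal 4-point lead, but the
# true gap could be anywhere from 2 points behind to 10 points ahead.
print(gap_range(52, 48))
```

This is why a 52% poll reading is consistent with both a winning 55% and a losing 49%, while a lead of more than 10 points survives the same margin of error.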

Nationally, polls show that 6% of voters have not decided whom to vote for. In battleground states, their proportion is higher. If almost all of them decide to vote for Trump, Biden’s lead would decline in the battleground states, and Trump may even win.

More important is how non-responders will vote, something most polls do not provide data on. They conveniently assume that non-responders would vote in the same way as responders. But that is rarely the case. The classic example of how polls can go wrong on account of non-response is the 1936 US presidential election, held in the middle of the Great Depression, between President Franklin D Roosevelt and the Republican Alf Landon.

The Literary Digest, with a well-established reputation for predicting elections, conducted a poll that gave Landon a 3-to-2 victory. The election result could not have been more different: Roosevelt received 98.49% of the electoral vote, which remains the highest share won by any candidate since 1820.

What went wrong? Well, the Literary Digest had mailed out 10 million sample ballots, but only 2.3 million people responded. Clearly, while pro-Roosevelt voters did not feel strongly enough to respond to the poll, anti-Roosevelt voters responded in large numbers.
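The mechanism can be made concrete with a small simulation. The support and response-rate figures below are invented for illustration, not the Digest's actual numbers: a clear pro-Roosevelt majority turns into an apparent Landon lead purely because one side mails back its ballot more often.

```python
import random

# Hypothetical sketch of non-response bias. Assume 60% of the mailed-to
# population backs Roosevelt, but his supporters return the ballot far
# less often than his opponents. The figures are illustrative only.
random.seed(1936)

TRUE_PRO_FDR = 0.60    # assumed true support (illustrative)
RESPONSE_PRO = 0.15    # assumed response rate among FDR supporters
RESPONSE_ANTI = 0.35   # assumed response rate among Landon supporters

responses = []
for _ in range(100_000):                     # mailed sample ballots
    pro_fdr = random.random() < TRUE_PRO_FDR
    rate = RESPONSE_PRO if pro_fdr else RESPONSE_ANTI
    if random.random() < rate:               # did this voter mail back?
        responses.append(pro_fdr)

polled_share = sum(responses) / len(responses)
# True support 60%; among responders it comes out near 39%,
# since 0.60*0.15 / (0.60*0.15 + 0.40*0.35) = 0.09/0.23.
print(f"True FDR support: {TRUE_PRO_FDR:.0%}, polled: {polled_share:.0%}")
```

A 60% winner thus shows up as a sub-40% loser, without a single respondent lying. This is why a poll's response rate matters as much as its sample size.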

Most polls, like the Literary Digest in 1936, do not generally highlight boring data such as the response rate. Commentators, too, often refrain from probing polls for pitfalls arising from poor response rates. Polling techniques have advanced substantially since the Great Depression. But glitches remain.

Researchers Houshmand Shirani-Mehr, David Rothschild, Sharad Goel and Andrew Gelman, in a 2016 study (bit.ly/34mU9gM), examined 4,221 late-campaign polls for presidential, senate and governor races between 1998 and 2014, and compared the polls' findings with the actual election results.

They found that polls could miss the actual result by 6-7 points in either direction — a spread of 12-14 points, roughly twice the margin of error that polls conventionally report. The title of Rothschild and Goel's subsequent article in the New York Times (nyti.ms/31uv8OP) is quite instructive: 'When You Hear the Margin of Error Is Plus or Minus 3 Percent, Think 7 Instead'.

Pollsters in India have fared even worse at predicting elections. Yet, these misses do not reduce the demand for polling predictions. Forecasts are instantly consumed, digested, and mostly forgotten by the next election.

The fact is that when the future is uncertain, the demand for astrologers increases, and this increase in demand remains high despite a plethora of false prophecies.

During long election campaigns, people need something new to discuss to stave off repetitive boredom, and duelling opinion polls produce figures that have as little validity, but create as much excitement, as astrological forecasts.




