Center for Strategic Communication

Mark Blumenthal had a long and very interesting analysis for the Huffington Post on the issues surrounding Gallup's polling of the last U.S. presidential election.

As most know, the Gallup tracking polls before the November 6 election did not give an accurate picture of what was happening around the country – in fact, they showed that Mitt Romney would win.

Mark notes:

Obama prevailed in the national popular vote by a nearly 4 percentage point margin. Gallup’s final pre-election poll, however, showed Romney leading Obama 49 to 48 percent. And the firm’s tracking surveys conducted earlier in October found Romney ahead by bigger margins, results that were consistently the most favorable to Romney among the national polls.

In the analysis, Mark goes through what could have caused such errors – from how the samples were collected to the balance of the electorate that Gallup used.

Mark also notes the need for transparency in such methodologies.

I recommend that anyone who is interested in polls and the measurement of wider opinion read the article.

He concludes by saying:

Given the scrutiny that has fallen upon pollsters for last year’s presidential predictions, let’s hope the “interested parties” include all of us.

As a national security think tank, we must ask: do the issues raised in this analysis have implications wider than just the US presidential election?

With millions of US taxpayer dollars being spent on opinion polls around the world for the US State Department, the US Department of Defense, and other agencies, there is a need to put such surveys into the right context – to understand both their limitations and what they should not be used for.

Opinion research in Afghanistan provides a good example of some of the problems.

Political leaders from around the world have used polling reports from Afghanistan to sway domestic opinion, to judge whether programs have been “successful,” and, most crucially, to make key political and military decisions in Afghanistan.

These reports have included questions such as “Right Direction,” how Afghans feel about the United States and NATO, or how Afghans feel about a particular political leader or institution. Many organizations have produced such products.

Such polling reports may be appropriate for a country such as the U.S. or the UK, where there is a deep understanding of the culture and grounded background data (such as Simmons and census information) that make sampling and trending possible – and even then, as Mark Blumenthal notes, they can be significantly wrong and lead to wrong decisions.

In Afghanistan – in the cities, provinces, districts, and villages where we try to influence opinion and conduct public diplomacy – there is no such basic data.

Without in-depth background data such as census reports, it is impossible to draw statistically meaningful conclusions from polling and other quantitative research.

We do not know for certain how many people live in Afghanistan, what the ethnic makeup of the country or of individual political units is, how the population breaks down by age, and so on – and many of the figures used by research organizations may themselves be subject to strong political influence (e.g. the ethnic makeup of the country).

Because there is no sophisticated baseline of population data against which to weight a sample, the results of sampling research cannot be accurately projected onto the wider population.

The results of such polling are therefore very unreliable. All they give us is what the three thousand or so respondents told a pollster at that particular moment.
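
A minimal sketch of the arithmetic may help make this concrete. The strata, sample sizes, and response rates below are entirely invented for illustration – they are not drawn from any actual Afghan survey – but they show how the same three thousand interviews can yield different “national” numbers depending on which population baseline the pollster assumes.

```python
# Illustrative sketch (hypothetical numbers throughout): projecting a sample onto a
# population requires trusted baseline figures about that population's composition.
import math

# Hypothetical sample: respondents per stratum and the share answering "right direction".
sample = {
    "stratum_A": {"n": 1500, "support": 0.40},
    "stratum_B": {"n": 1000, "support": 0.65},
    "stratum_C": {"n":  500, "support": 0.55},
}

def weighted_estimate(population_shares):
    """Post-stratified estimate: weight each stratum's result by its assumed
    share of the national population."""
    return sum(population_shares[s] * sample[s]["support"] for s in sample)

# Two plausible but different assumptions about the population's composition,
# as might happen when no reliable census exists.
baseline_1 = {"stratum_A": 0.50, "stratum_B": 0.35, "stratum_C": 0.15}
baseline_2 = {"stratum_A": 0.40, "stratum_B": 0.45, "stratum_C": 0.15}

est_1 = weighted_estimate(baseline_1)   # 51.0%
est_2 = weighted_estimate(baseline_2)   # 53.5%

# Nominal margin of error for n = 3,000 under ideal simple random sampling --
# an assumption that itself depends on having a complete sampling frame.
n = sum(s["n"] for s in sample.values())
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)   # roughly +/- 1.8 points

print(f"Estimate under baseline 1: {est_1:.1%}")
print(f"Estimate under baseline 2: {est_2:.1%}")
print(f"Nominal +/-{moe:.1%} margin of error for n = {n}")
```

Under these invented numbers, simply changing the assumed population mix moves the headline figure by more than the quoted sampling margin of error – which is exactly the problem when no reliable census underpins the weighting.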

Unfortunately this can lead to dangerous conclusions and poor decision-making. I would recommend our report from last year on measuring success in Afghanistan – here.

We must recognize the limitations of opinion polling, and move on from it.