Over the next few months, we’re going to be describing some of the special techniques that we use to estimate constituency opinion.

Before we do that, however, it’s useful to explain why we have to use special techniques. Why can’t we just use the same polling techniques that we use to get opinion at the national level?

The answer has to do with the accuracy of our measures, and how this changes with sample size. Almost all reputable newspapers, when they print an opinion poll, will present the margin of error.

The normal margin of error you see reported in newspapers gives us a range of values such that, if we were to carry out this exercise 20 times, the true value would be in that range 19 times out of 20. (It might seem weird to talk about the likelihood of being right in nineteen other polls that we haven’t conducted, rather than the likelihood of being right with this actual poll that we have conducted. If it seems weird to you, you might want to look into Bayesian statistics).

Smaller margins of error are better, and smaller margins of error come through sampling a larger number of people. But the returns on sampling more people diminish: an extra 500 respondents buys you much less precision when you’ve already sampled 10,000 than when you’ve only sampled 1,000. Specifically, the margin of error, in the worst-case scenario where opinion is divided 50:50, can be calculated the following way:

- divide one by four times your sample size
- take the square root of this number
- multiply it by a special constant — we’re going to be using 1.96, which gives us a margin of error where we’re only wrong one time out of twenty

So, for a sample size of 1,000, that means 1 / 4,000 = 0.00025, the square root of which is 0.01581, which when multiplied by 1.96 gives us 0.03099, or about 3.1%.
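The three-step recipe above can be sketched as a short function (Python here; the function name is ours, and 1.96 is the usual constant for a 95% confidence level):

```python
from math import sqrt

def margin_of_error(n, z=1.96):
    """Worst-case (opinion split 50:50) margin of error for a sample of size n.

    Follows the recipe above: divide 1 by 4n, take the square root,
    multiply by the constant z (1.96 for 95% confidence).
    """
    return z * sqrt(1 / (4 * n))

# For a sample of 1,000 people:
print(round(margin_of_error(1000), 5))  # → 0.03099, i.e. about 3.1%
```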

What does that mean for estimating opinion in constituencies? It means we either need very big samples in each constituency, which is very expensive, or we need very big margins of error.

Let’s assume that we’re working with a really large sample of 40,000 people, which usually only happens when polling companies aggregate polls conducted over several months. Let’s also assume (and this is an inaccurate assumption, but it’s the best case scenario for us) that our sample is equally divided between all 650 constituencies in Great Britain and Northern Ireland.

If we’re also happy to assume (and this is another inaccurate assumption) that a nationally representative sample is composed of locally-representative sub-samples, then the sample size in each of our constituencies is 40,000 / 650, or roughly 62 people.

The margin of error on that kind of sample is huge: 1 / (4 × 62) ≈ 0.004, the square root of which is 0.064, and 1.96 times this number is 12.4%. That’s big. It means that we have to say, ‘our estimate is 50%, but the true value could plausibly be anywhere between 38% and 62%’.

And this is starting with huge, huge national samples. If we shrink our national sample to 10,000, then the situation gets dramatically worse.
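To put a number on “dramatically worse”, we can reuse the same worst-case formula for both national sample sizes, keeping the same simplifying assumption of an even split across 650 constituencies (a sketch; the function name is ours):

```python
from math import sqrt

def margin_of_error(n, z=1.96):
    """Worst-case (opinion split 50:50) margin of error for a sample of size n."""
    return z * sqrt(1 / (4 * n))

for national_n in (40_000, 10_000):
    # Assume the national sample divides evenly across 650 constituencies.
    per_seat = round(national_n / 650)
    moe = margin_of_error(per_seat)
    print(f"{national_n:>6} nationally -> {per_seat} per constituency, "
          f"margin of error ±{moe:.1%}")
```

With 10,000 respondents nationally, the per-constituency sample falls to about 15, and the margin of error balloons to roughly ±25%.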

Unless we’re happy providing very imprecise estimates, or have lots of money to commission polls with huge samples, we need something better. As you might have guessed, we don’t have lots of money, and we’re not happy with imprecision. So over the next weeks, we’ll be describing some techniques for getting more precise constituency estimates with moderately-sized samples.
