Statistical methods rarely have cool names.
Tibshirani’s lasso and the ‘bootstrap’ are perhaps the only exceptions.
So we’re going to follow Andrew Gelman’s usage and talk about Mister P instead.
Mister P is a way of estimating opinion in small areas that’s more accurate than directly estimating opinion from tiny samples in the way we discussed a couple of weeks ago.
The Mister P method has a number of features, so we are going to break our description into two posts, dealing with the ‘regression and post-stratification’ part in this post, and the ‘multilevel’ part next time.
The ‘regression and post-stratification’ part of Mister P draws on the two-stage ‘simulation’ approach used by Pool, Abelson and Popkin to estimate state-level opinion back in 1965.
The first step is to build a statistical model — a regression model — of the opinion you’re interested in, using a national sample.
The precise contours of this model will depend on what opinion you’re interested in. But for our purposes — and bearing in mind what’s coming next — a model of vote choice which uses certain demographic variables as predictors — age, gender, occupation, education, and housing and marital status — will do fine.
Models of vote choice like this can be used to make predictions for particular values of our predictors. Generally, political scientists don’t talk much about predictions — we’re more interested in hypothesis testing, and we rarely have fresh data for which we can generate genuine predictions rather than slightly artificial ‘retrodictions’. But all of the statistical models we use can make predictions, with varying degrees of accuracy. So, based on our model, we can predict the probability that a 55-year-old university-educated male working in a managerial role votes Conservative.
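To make the prediction step concrete, here is a minimal sketch of how a fitted vote-choice model generates a probability for one demographic type. The coefficients are made-up numbers for illustration, not estimates from any real survey:

```python
import math

# Illustrative coefficients for a logistic regression of Conservative vote
# choice on demographics. These numbers are invented for the example; in
# practice they would be estimated from a national survey sample.
coef = {
    "intercept": -0.4,
    "age_55_plus": 0.5,
    "male": 0.1,
    "degree": -0.3,
    "managerial": 0.6,
}

def predict_prob(profile):
    """Predicted probability of voting Conservative for one demographic type.

    `profile` lists the dummy predictors switched on for this type.
    """
    eta = coef["intercept"] + sum(coef[k] for k in profile)
    return 1.0 / (1.0 + math.exp(-eta))  # inverse logit

# A 55-year-old, university-educated male in a managerial role:
p = predict_prob(["age_55_plus", "male", "degree", "managerial"])
print(round(p, 3))
```

Any model that maps a combination of predictor values to a probability would do here; logistic regression is simply the most common choice for a binary vote-choice outcome.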
We need these predictions for the last step of Mister P: post-stratification. Imagine that we’ve got a really simple model, which just has gender and university education. Using census data, we could draw up, for each constituency, a tally of all the people who satisfy one of the combinations of the variables in this model.
So, we’d tally up university-educated males, university-educated females, non-university educated males, and non-university educated females. If we were thinking visually, we could even put them in a two-by-two table.
And then we could make a prediction for each box in that table. We’d take the predicted probability of each person of that description voting Conservative, and multiply it by the number of people in that box. If we do that for every box, we can predict, for each constituency, how many people vote Conservative there.
Now, our models are a bit more complicated. Instead of two-by-two boxes for two variables, each of which has two categories, we have 2x7x5x2x2x10 tables, which are pretty awkward to work with. That’s 2800 different ‘types’ of people in each constituency — or 1.82 million different predictions.
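The cell counts are easy to verify. The figure of 650 constituencies below is an assumption (it is the number of House of Commons seats, and it is consistent with the 1.82 million figure in the text):

```python
# Number of post-stratification cells in the fuller model: the product of
# the category counts for each demographic predictor (2x7x5x2x2x10).
cells = 2 * 7 * 5 * 2 * 2 * 10

# Assumed number of constituencies (UK House of Commons seats).
constituencies = 650

print(cells, cells * constituencies)
```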
This method restricts us to models that make predictions from things asked about in the census. Other factors — early-years socialization into voting for a particular party, post-materialist values — all that gets swept under the carpet. But as long as there is some association between people’s demographic characteristics and their political opinions, we can make some headway.
We will see in the next post how using multilevel regression in the first step can improve our small area opinion estimates by making better use of all the information in our survey data and by allowing us to include extra information on constituency characteristics.