[I chose the picture above to depict the dynamics of ceteris paribus, “all other things remaining the same”: frozen in time, like a picture.]
I decided I will go over the Edexcel Economics A (2015) specification step by step for a while. My son is doing an A level in this subject next year, and, as a former teacher and textbook writer, I find it helpful to go over it so I am better informed. What better way to get a grasp of something than to write about it? So here we are. To find out how deeply the concepts from the specification are covered, I use Alain Anderton’s Edexcel AS/A level Economics book, which my son bought for school.
Although there are topics I would like to discuss more than others, for now I find it most helpful to start at the beginning of the specification and work from there. That may change, and I will let you know when it does, although I do allow myself some diversions every now and then.
So I kick off with Theme 1, Introduction to Markets and Market Failure; 1.1, Nature of Economics; 1.1.1, Economics as a social science [1].
I think it is important that students understand that assumptions, such as the ceteris paribus condition, are made when modelling. This matters because, to assess the implications of a model’s outcome, you need to know the assumptions behind it. From experience, I know we teachers spend ample time discussing how prices are affected when other demand factors, like income or the price of other goods, change. But do we also discuss this from the perspective of the assumptions being wrong?
You are likely aware that quantity demanded can be written as a function of all its factors: qv = f(p, p1, p2, i, t, n), where p is the price of the good itself, p1 the price of substitute good 1, p2 the price of complementary good 2, i income, t taste, and n the number of buyers. This function could, for instance, take the form qv = -0.5p + 0.2p1 - 0.3p2 + 1.0i + 0.05t + n, which is completely arbitrary, and just for discussion’s sake.
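For readers who like to tinker, this arbitrary demand function can be sketched in a few lines of Python. The coefficients are the same made-up ones from the text, not estimates of anything:

```python
# A toy linear demand function, using the arbitrary coefficients
# from the text (purely for illustration, not estimated from data).
def quantity_demanded(p, p1, p2, i, t, n):
    """qv = -0.5p + 0.2p1 - 0.3p2 + 1.0i + 0.05t + n"""
    return -0.5 * p + 0.2 * p1 - 0.3 * p2 + 1.0 * i + 0.05 * t + n

# Ceteris paribus: vary only the good's own price, hold the rest constant.
base = dict(p1=10, p2=5, i=20, t=50, n=3)
for p in (2, 4, 6):
    print(p, quantity_demanded(p, **base))  # 25.0, 24.0, 23.0
```

Holding the other factors constant, ceteris paribus, the loop traces out the familiar downward-sloping relationship between price and quantity demanded.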
When looking for more information on the subject, I stumbled upon an article by David Freedman [2], Why Economic Models Are Always Wrong [3], in which he lays the blame for the 2008 credit crisis on the practice of modelling itself, or, to be more accurate, on calibrating models. He writes: “[..]What if there were a way to come up with simpler models that perfectly reflected reality? And what if we had perfect financial data to plug into them? Incredibly, even under those utterly unrealizable conditions, we’d still get bad predictions from models. The reason is that current methods used to “calibrate” models often render them inaccurate.”
Calibrate. It means “to mark units of measurement on an instrument so that it can measure accurately” [4]. What needs calibrating in a model? The parameters that indicate the relationship between the variables (p, p1, p2, i, t, n) and the outcome qv. In the article, Freedman describes a method geophysicist Jonathan Carter used to improve the predictability of his geophysical models. To calibrate a model he had devised himself, he used actual historical geophysical data. The outcome of his model depended on several variables, just as qv depends on several demand factors. Freedman writes: “He [adjusted] the parameters so that the model would have ‘predicted’ that historical data”. In my example this would mean changing the numbers -0.5, +0.2, -0.3 etcetera.
As it turned out, there were many different sets of parameters that seemed to fit the historical data. Given a mathematical expression with many terms and parameters, and thus many different ways to add up to the same single result, this makes sense. It is not feasible to make statements about the relationship between two variables on their own, since historical data do not allow you to isolate those two from all the rest. Anderton [5] writes: “It is often not possible [in economics] to establish control groups to conduct experiments in environments which enable one factor to be varied whilst other factors are kept constant.”
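This calibration problem can be illustrated with a stripped-down, two-factor version of the demand function (all numbers made up for illustration): if, in the historical record, income always moved in step with price, two contradictory parameter sets can “predict” the past equally well.

```python
# Two-factor toy model: qv = a*p + b*i (own price and income only).
# In this made-up history, income always moved with price (i = 2p),
# so the two effects cannot be disentangled from the data.
history = [(p, 2 * p) for p in (1, 2, 3, 4, 5)]

def predict(a, b, p, i):
    return a * p + b * i

# Two rival calibrations; both satisfy a + 2*b = 1.5.
set_A = (-0.5, 1.0)  # own-price effect negative, as theory expects
set_B = (0.5, 0.5)   # own-price effect positive(!)

# Both "predict" every historical observation identically...
for p, i in history:
    assert predict(*set_A, p, i) == predict(*set_B, p, i)

# ...but they disagree as soon as price and income stop moving together:
print(predict(*set_A, p=10, i=10))  # 5.0
print(predict(*set_B, p=10, i=10))  # 10.0
```

Re-calibrating with more of the same collinear data would never settle which set is right; only observations in which the factors vary independently could, and that is precisely what Anderton says economics rarely has.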
On the other hand, computer processing power allows us to evaluate all combinations of parameters in one run. But what should we do with the outcomes? If all possible sets of parameters delivered the same or similar results, that would be something to go on. But what if there are outliers? What if the outcomes contradict each other?
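Such a brute-force run is easy to sketch (again with made-up numbers): sweep a coarse grid of parameter pairs and keep every pair that reproduces a short, collinear history.

```python
# Brute-force "evaluate every combination" over a coarse grid.
# Made-up history where income again moves with price (i = 2p),
# generated from qv = 1.5p; tuples are (p, i, observed qv).
history = [(1, 2, 1.5), (2, 4, 3.0), (3, 6, 4.5)]

fits = []
for a10 in range(-20, 21):        # a from -2.0 to 2.0 in steps of 0.1
    for b10 in range(-20, 21):    # b likewise
        a, b = a10 / 10, b10 / 10
        if all(abs(a * p + b * i - q) < 0.01 for p, i, q in history):
            fits.append((a, b))

print(len(fits), "parameter sets fit the history")
print("one with a negative own-price effect:", min(fits))
print("one with a positive own-price effect:", max(fits))
```

Every pair on the line a + 2b = 1.5 fits the past perfectly, and the fitting pairs include both negative and positive own-price effects: exactly the “outcomes contradict each other” problem.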
Calibration in itself is a good thing. If we calibrate, we do not shut our eyes to new information (recent historical data), and we aim to improve our method. But if we have to calibrate often, does that not testify to a bankrupt method? Freedman writes: “‘When you have to keep re-calibrating a model, something is wrong with it,’ he [Wilmott] [6] says. ‘If you had to readjust the constant in Newton’s law of gravity every time you got out of bed in the morning in order for it to agree with your scale, it wouldn’t be much of a law.’” By that standard, economics, as a science, has no laws.
You could reason that a model based on (historical) data is not based on assumptions. However, assumptions are made the moment weights are attributed to the variables. Then again, when you have a lot of historical data (big data), one solution could surface. But even then the model rests on an assumption: that historical data are predictive of the future.
I must admit, I do not know how often economic models are right in their predictions. As for now, models are extremely useful to me as a way to think about certain principles and relationships in isolation to gain a better understanding.
How can you use this in your lessons?
I think that when I teach a model, I should be aware of its history, its assumptions, and its practical use. This could be reflected in assignments: what would, for instance, be the outcome of a policy measure if the model were different? So, from now on, I will be on the lookout for such opportunities, and so could you.
1. A further elaboration of the specification can be found in the specification itself on the Pearson site.
2. David Freedman is the author of Wrong: Why Experts Keep Failing Us – and How to Know When Not to Trust Them, Little, Brown and Company, 10 June 2010.
3. David Freedman, Why Economic Models Are Always Wrong, Scientific American, 26 October 2011.
4. Definition from the Cambridge Dictionary.
5. Alain Anderton, Edexcel AS/A level Economics, 6th edition, Pearson, 2015.
6. I assume Freedman is referring here to Paul Wilmott of Wilmott, a quantitative financial analysis company.