It's an emergency! For God's sake, get me a social scientist! Why misunderstanding the aims of research is crippling education.

I'm elbow deep in gizzards this week with the number of geese I've slaughtered in the name of prognostication. I haven't developed an emergent tendency towards serial killing; I've just been trying to answer an age-old educational conundrum: do schools need more money? And answering that seemingly simple question led me to question the whole educational research racket, or at least its misappropriation by the people we trust to run the show.

My unconventional approach to divination and revelation was prompted when the government published school-by-school spending figures along with last week's league tables. Although the DfE is being coy, claiming that this publication is purely linked to the aim of greater transparency, we all know that nosey Noras will be asking if schools give value for money. Very sneaky. So how do we know if more money actually leads to better results in education anyway? A BBC report from the 14th of January looked at the evidence:

'A recent Pisa study from the OECD compared academic performance across a wide range of countries and offered some support for the government's view that money is not a key factor. Another study, by Francois Leclercq for UNESCO in 2005, surveyed a wide range of other economists' attempts to find a correlation between resources and results. Some found a positive correlation. Others found the opposite. Leclercq concluded that, whichever view you took, it was as much a matter of one's previous belief and opinion as it was of scientific knowledge.' (1)

One major study (by Hanushek and Kimko, 2000) looked at pupils' international maths scores and compared them to several different measures of school spending. It is not clear, they found, whether spending more on schools leads to better results. Their conclusion was: "The overall story is that variations in school resources do not have strong effects on test performance." (1)
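
You can see how honest number-crunchers end up on opposite sides. Here's a toy sketch in Python; every figure below is invented (this is not PISA data, just an illustration of how the same numbers can feed both camps). Pool all the school systems together and spending looks like it matters; compare like with like and the effect all but vanishes:

```python
import random
import statistics

random.seed(42)

# Hypothetical data: per-pupil spending (in thousands) and test scores
# for two groups of school systems. All numbers are invented. Within
# each group, extra spending buys almost nothing; between the groups,
# richer systems spend more AND score higher for reasons (teacher
# supply, home background) that have little to do with the marginal pound.
spend_a = [random.gauss(5, 0.5) for _ in range(50)]   # lower spenders
score_a = [450 + 2 * s + random.gauss(0, 20) for s in spend_a]
spend_b = [random.gauss(10, 0.5) for _ in range(50)]  # higher spenders
score_b = [500 + 2 * s + random.gauss(0, 20) for s in spend_b]

# Pool everything and the correlation looks impressively positive...
pooled_r = statistics.correlation(spend_a + spend_b, score_a + score_b)
print(f"pooled r:  {pooled_r:.2f}")

# ...but within either group it is close to zero.
print(f"group A r: {statistics.correlation(spend_a, score_a):.2f}")
print(f"group B r: {statistics.correlation(spend_b, score_b):.2f}")
```

Which cut of the data you report is a judgement call, which is rather Leclercq's point about previous belief and opinion.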

So that's all perfectly clear then. At least we have all the data we need to make a decision. Not.

Think about what's happening here: tens of millions of pounds spent, an equivalent proportion of academic labour, the finest minds in education, all focused on one point, one question, like shining a million light bulbs onto a spot and turning it into a laser. Only to find that all you have is a very bright room, and an army of moths dive-bombing the window.

If you turned that focus, funding and fervour on to a physical task, you could imagine the mountains that could be built, or the abysses excavated. If it were directed at an object of material interest, such as 'how high can a house of cards be built?', then we'd have the answer by tea time and all be driving home in our 1976 Gran Torinos with the overspend. So why the problem uncovering truths in educational research?

The answer lies in the methodology and expectations of social science itself, and its differences from the natural sciences: chemistry, physics, biology, astronomy, oceanography and so on; anything that is amenable to the scientific method of study. Social science (and I'll be coming back to that term later) is the attempt to replicate that method in the field of human behaviour. As the latest marketing meme-worm would say, simples.

What is the scientific method? In essence it is based on the following process:

1. Data regarding physical phenomena are collected by observation that is measurable and comparable.
2. This information is collated and a hypothesis is constructed which offers some kind of explanatory description of the events described by the data; to look at it another way, we discern a pattern in the data that offers the potential to predict or define, usually on the assumption of causality, but often with a purely descriptive intent.
3. This hypothesis is tested by experimentation. In the light of the new data, the hypothesis is either discarded or retained and tested again; the more extensive and demanding the testing it survives, the less uncertain it is claimed to be. (There's a toy sketch of this step after the list.)
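
Since step three is the one most often skipped in education, here's what it looks like as a toy Python simulation. Everything in it is invented: a made-up intervention that, deliberately, does nothing, to show why a hypothesis must survive repeated testing rather than one flattering run:

```python
import random
import statistics

random.seed(1)

def run_experiment(effect, n=100):
    """Simulate one controlled experiment: control group vs treated group."""
    control = [random.gauss(50, 10) for _ in range(n)]
    treated = [random.gauss(50 + effect, 10) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# The hypothesis (step 2): 'the intervention raises scores'.
# The test (step 3): run the experiment repeatedly and see whether
# the differences keep pointing the same way.
true_effect = 0.0  # in this toy world the intervention does nothing
diffs = [run_experiment(true_effect) for _ in range(20)]
positive = sum(d > 0 for d in diffs)
print(f"{positive}/20 experiments showed a positive difference")
# Roughly half will, by chance alone; a hypothesis that only survives
# cherry-picked runs deserves to be discarded.
```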

I've simplified the process on a similar scale to describing Moby Dick as 'a big fish', so forgive my brevity. There are long-established difficulties with this method that offer challenges to both the philosopher and the scientist: have I tested enough? Is my interpretation of the data biased? Have I collected the data in an ethical manner? Have I performed relevant tests? Are there alternative explanations? Have I mistaken correlation for causality? And so on.

But scientists have one fairly large trump card to play when contesting with chippy Humanities graduates about all this: science seems to work. Your car works; your phone reliably transmits emails of funny dog pictures around the world; planes have a habit of not falling from the skies. If the scientific method isn't perfect, it's the closest thing we've got.

And of course there is a much more profound question: is anything certain? Rationalists like Descartes would say that there are things that can be ascertained by the pure light of reason itself, such as his own existence (in the much-misquoted Cogito, Sum). But what about the world? Descartes' argument for the proof of an external world is as convincing as the plot line of My Family, and most people (certainly anyone other than lonely, friendless hermits) turn to our observation of the world as the best basis for understanding how things work: broadly speaking, the empirical approach.

But Hume (certainly one of the most readable of the British Empiricists) famously drove a bus through the empirical claims to certainty by describing all predictive statements about the world (the Sun will rise tomorrow; water boils at 100 degrees Celsius at sea level, and so on) as inductive inferences. In other words, they rely on our assumption that the future will be like the past, which of course is something we can never test. To understand the importance of this, we can look to Popper's Black Swan Problem: until the discovery of said sooty avian, any European would have said that all swans were white, and they would have had millions of observations, made over centuries by millions of people, to back this hypothesis up. Of course, no hypothesis can ever be established beyond doubt, and any decent scientist is aware of this.

But this isn't a problem of science; it's only a problem for people who misunderstand the scientific method: it never sets out to establish foundational, necessarily true propositions; it only seeks to establish more or less probable hypotheses, nothing more but certainly nothing less. Its enormous success has led many people to become acolytes of this New God, ascribing to it the infallibility normally reserved for the theistic God or his chosen representatives. But science doesn't make these claims. It simply observes, records, considers, and reflects. And when something seems to work, it runs with it. No other method comes close to its predictive and descriptive powers, so until something better comes along, we work with it, and ignore the spoon benders and the homoeopaths who chant and caper, and who believe that, because empirical scientific claims lack certainty, they can be contested, dismissed and replaced with their own particular and peculiar branches of witchcraft and ju-ju.

Which brings me, finally, to social science and its germane offspring, educational social science. The desire to apply the methods of the natural sciences to the social sphere is entirely understandable; after all, the benefits that have flowed from the laboratories and notebooks of the men in white coats have given us long life, comfort, leisure time and, most importantly, television and Mad Men. Imagine the benefits we could glean if we turned our microscopes and astrolabes away from covalent bonds and meteorological taxonomy and towards the thing we love and value most: ourselves. Cue: psychology, anthropology, history, politics, educational theory, etc. Now all we have to do is send out the scientists, and sit back and wait for all that lovely data to be turned into the cure for sadness, the end to war, the answer to life's meaning and, while you're at it, how best to teach children.

And yet, here we are, still waiting. The example I gave at the start of this article serves as just one illustration. For every study you produce that demonstrates red ink lowers pupil motivation, or brings them out in hives or something, I can show you a study that says, no, it's green ink that does the trick. For any survey that shows the benefits of group work, there are equivalent surveys that say the same about project work, or individual work, or the Montessori method, or learning in zero gravity or whatever. It is, to be frank, maddening, especially if you're a teacher and on the receiving end of every new initiative and research-inspired gamble that comes along. The effect is not dissimilar to being at the foot of an enormous well and wondering not if, but how many buckets of dog turds will rain on you that day, and how many soufflés you'll be expected to make out of it. To quote Manzi:

'Unlike physics or biology, the social sciences have not demonstrated the capacity to produce a substantial body of useful, nonobvious, and reliable predictive rules about what they study—that is, human social behavior, including the impact of proposed government programs. The missing ingredient is controlled experimentation, which is what allows science positively to settle certain kinds of debates.'(2)

And that, I think, summarises the problem teaching has with the terrifying deluge of educational research that has emerged in the twentieth century and beyond, and the apparently awful advice that has drenched the education sector for decades with its well-intentioned but essentially childish misunderstandings. When I entered the profession I met many old hands who would greet each new initiative with a pained 'not that again' expression, in the style of Jack Lemmon chewing tinfoil. At first I thought they were merely stubborn old misanthropes, but now I see that they were at least partially motivated by desensitisation: they had had scores of magic bullets and educational philosopher's stones catapulted at them over the decades, and had learned to wear tin helmets to deflect as many of them as possible. None of this justifies ignoring new ideas, but it's easy to understand why teachers become immune to the annual initiative.

And yet even this is to be unfair about the nature of social scientific research and its alleged conclusions. In the field of Religious Studies, for example, I find very little research that claims to point to anything intrinsically predictive or definitive. Much of the research in this area is acutely aware of its limitations, possibly because of the explicit understanding that any discussion of faith matters automatically puts one in the proximity of discussions about truth and validity, opinion and subject bias. Of course, there is a lot of bogus research that deserves to be laughed at too, but it's interesting that in a field so contested one should find such care. Social science only gets itself into hot water when people take its findings to be more than social scientists would actually claim: as possessing some kind of finality and certainty.

Any good piece of social science I have read relating to education is always upfront about the limitations of its method of testing, is always tentative in its assertions, and always hesitates to assert anything substantially beyond the data obtained. But I have also read a great deal of bad research that appears to think itself a branch of physics: this method, it thunders, produces this result. A key problem here is what might be called high causal density: when we attempt to ascribe a social phenomenon to a particular causal precedent, we immediately run into the problem that any one outcome (such as improved grades or behaviour) is extremely hard to trace back to a given event; there are enormous numbers of factors that could correspond to the outcomes under examination. Thus, if I introduce a new literacy scheme in school based on memorising the Beano, and next year I see a 15% rise in pupils obtaining A*-C in English GCSE, any claim I made that the two were connected would have to wrestle with other possible explanations: the group being observed was smarter than previous groups; or they had better teachers; or they were born under a wandering star, ad infinitum. This causal density is particularly noticeable in any endeavour that studies human behaviour, with its multitude of perspectives, invisible intentions and motives. Put simply, people are infuriatingly difficult to second-guess and predict.
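
To see how little a before-and-after comparison can tell you, here's a toy Python sketch of that Beano scenario. Every factor name and weight below is invented; the point is only that when many causes move at once, a year-on-year difference says almost nothing about the one cause you happen to be selling:

```python
import random

random.seed(0)

# Toy model of 'causal density': a cohort's pass rate is driven by many
# factors, only one of which is the new scheme. All the names and
# weights below are invented for illustration.
def pass_rate(scheme, cohort_ability, teacher_quality, noise):
    return (0.50               # baseline pass rate
            + 0.00 * scheme    # the scheme's true effect: nothing
            + 0.10 * cohort_ability
            + 0.08 * teacher_quality
            + noise)

# Year 1: no scheme. Year 2: scheme introduced, but the cohort and
# the staffing also changed, as they always do.
year1 = pass_rate(scheme=0,
                  cohort_ability=random.gauss(0, 1),
                  teacher_quality=random.gauss(0, 1),
                  noise=random.gauss(0, 0.03))
year2 = pass_rate(scheme=1,
                  cohort_ability=random.gauss(0, 1),
                  teacher_quality=random.gauss(0, 1),
                  noise=random.gauss(0, 0.03))

print(f"Year 1 pass rate: {year1:.0%}")
print(f"Year 2 pass rate: {year2:.0%}")
# The scheme contributes exactly nothing (see the 0.00 weight), yet the
# two rates will usually differ, and whoever owns the scheme will be
# tempted to claim the difference.
```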

The position is similar to weather forecasting. We might be able, broadly speaking, to predict that winter will be colder than summer. But anything much more specific than that gets harder and harder; even the Met Office doesn't issue long-term forecasts any more; there just isn't any point. And their daily forecasts update every few hours or so, because the factors involved, while potentially measurable in principle, are just too complex and numerous to be measured in practice. The problem is multiplied when we consider that human behaviour may not, after all, be reducible to materialist explanations, and may therefore escape causal circumscription entirely. The debate over free will is far from over; indeed, it is as alive as ever.

This problem (namely, that many people engaged in social science have a shaky grasp of the powers and frailties of the scientific method itself, and produce papers riddled with subject bias, observer bias, researcher bias, and the desire to produce something that justifies their tenure and funding) possibly wouldn't upset too many people, except that, as a concomitant to its claims to provide meaningful guidance in social affairs, the field also expects to be used, and sometimes succeeds, in driving the engine of policy-making. And that, dear friends, is where people like me come into the equation.

Here are some of the things that are assumed to be axiomatic truths in the contemporary classroom:

1. Lessons should be in three parts.
2. Children putting their hands up is bad.
3. Red ink will somehow provoke them to become drug dealers and warlords.
4. Every lesson must have a clear aim.
5. Every lesson must conclude with a recap.
6. Every lesson must show clear evidence of progression, in a way that can be observed by a blind man on the moon with a broken telescope.
7. Levelling children's work is better than giving them grades. Grades are Satanic.

I could go on, but there aren't enough tears in the world. These are just some of the shackles that teachers are burdened with, dogma with which they must comply. Why? Because someone, somewhere produced a study that 'proved' it. And that proof was taken as gospel, and then passed down by well-meaning ministers, the vast majority of whom have never set foot in a classroom in a pedagogic capacity, unless accompanied by cameras.

So that's where we stand right now; social science being produced by the careless, consumed by the gullible, and transmitted down to the practitioner, who waits at the foot of the well with an umbrella. In this arena, is it any wonder that the teacher has been devolved from respected professional, reliant on judgement, wisdom and experience, to a delivery mechanism, regurgitating the current regime's latest, fashionable values? No wonder teaching is in a bit of a mess right now. We're not expected to be teachers; they want us to be postmen.

In this vacuum of credible knowledge, is it any wonder that teachers feel uncertain, misguided, confused about their roles, about the best way to teach, and troubled by the nagging suspicion that the best ways to teach are staring right at them?

The most certain assertions are those that make the least specific claims and fit the greatest number of observations. These are the principles that teachers should be guided by, and that's why your own professional experience is at least as good a guide as the avalanche of 'best practice' and OfSTED criteria that has resulted from the misappropriation of science; in many cases, your own experience will be better. If you have years of experience and genuinely reflect on your practice, if your classes are well behaved, the children express enjoyment and the grades are good, then some would say your experiences were merely anecdotal; I would say they were a necessary part of professional wisdom and judgement.

In fact, I would say they were better.


A priori, the social scientific method is best used as a commentary on human beings and their behaviour, not as a predictive or reductive mechanism. So the next time you read another piece of educational research hitting Breakfast TV, feel free to say, 'Oh really? That's interesting.' But make sure you hold your breath. And get your umbrella and saucepan out.


1. BBC News, 'What Does Spending Show?': http://www.bbc.co.uk/news/education-12175480
2. Jim Manzi, City Journal: http://www.city-journal.org/2010/20_3_social-science.html
3. http://playthink.wordpress.com/2010/08/03/on-the-limits-of-social-science/
4. http://www-personal.umd.umich.edu/~delittle/Encyclopedia%20entries/philosophy%20of%20social%20science.pdf

See? I put references and everything this time. That was so people would take it more seriously. Homoeopaths are really good at this, especially when they're referring to other homoeopaths, quack PhDs and dodgy journals run from the back of someone's health food shop.

Comments

  1. I can only write one word about this and I write it with great sadness: "Yes".

  2. The process I have described only becomes a problem when you have an enormous hierarchy which insists that THEIR way is the one correct way to teach, and the one correct way to learn, and all must follow the path of the faithful. Any educational research that claims to be authoritative becomes part of this deprofessionalisation process, because it makes claims to an objective truth in education; that there are right ways of doing things, and wrong ways, and these ways can be discerned through pilot groups and the collation of empirical data. Sorry, but that's a faith position; they can believe it if they want, but they shouldn't trouble teachers with their potty, petty dogmas.

  3. Any educational research - indeed any social, behavioural research - should be obliged by the publishing journal to state how, when and where the Hawthorne effect was accounted for.

    Anyone who thinks that using a specific teaching method, an organisational structure or some other magic bullet is effective must demonstrate that the enthusiasm of the researchers or the participants had no effect, or only a small and measured effect, on the results.

    If they've made no attempt to identify, let alone measure, this impact, the research results are worthless.

