Crystal Bollocks 2: A response song
'An empire will rise in the East...Danny Adams in 11F will get a B...'
Remember Response Songs? They used to be big in the charts, when popular songs would be replied to; I believe the hippety-hoppers still do it quite a lot. I had my own experience in the form of this blog (click here) by Mike Herrity, who has apparently been chewing on the issues raised in my recent blog on Value Added, and target setting in general. He did so in such a polite and thoughtful way; I must say, I wish that more people could hold a dialogue about education like this. A lot of the comments I get remind me of the sort of responses underneath online articles in the Daily Mail ('time to end the great democratic experiment' / 'boil in the filth of your own excrement' etc). So I thought I would do the same.
The first thing is to admit where I might have been clearer:
1. I conflated Contextual Value Added with Value Added.
Value Added is a more general measure of absolute progress; contextual value
added is the same, but taking into account the social and economic indicators
I mentioned previously. Now there are significant differences
between these of course- but they both still rely on the same principles of
comparison, and estimation of average outcome:
From the DfE itself, a reminder that CVA and VA share most of the same DNA:
‘CVA is not very different from simple VA. The basic
principle of measuring progress from the KS2 test to qualifications attained at
KS4 remains the same. However, a number of other factors which are outside a school's
control, such as gender, special educational needs, movement between schools,
and family circumstances, are also known to affect pupils' performance. CVA
therefore goes a step further than simple VA by taking these factors into
account and thus gives a much fairer measure of the effectiveness of a school.’
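To make that shared DNA concrete, here's a toy sketch of the principle being described- every number, grouping and the little contextual adjustment below is invented by me purely for illustration, and bears no resemblance to the actual DfE/FFT models:

```python
# Toy illustration of the shared principle behind VA and CVA.
# The scores, groupings and the 10-point adjustment are invented for
# the example; the real DfE/FFT models are far more elaborate.

from collections import defaultdict

pupils = [
    # (name, KS2 points, KS4 points, eligible for free school meals)
    ("A", 27, 340, False),
    ("B", 27, 310, True),
    ("C", 27, 355, False),
    ("D", 21, 280, True),
    ("E", 21, 295, False),
]

# Simple VA: compare each pupil with the average outcome of pupils
# who started from the same KS2 score.
by_ks2 = defaultdict(list)
for _, ks2, ks4, _ in pupils:
    by_ks2[ks2].append(ks4)
expected = {ks2: sum(v) / len(v) for ks2, v in by_ks2.items()}

for name, ks2, ks4, fsm in pupils:
    va = ks4 - expected[ks2]
    # 'CVA-style' tweak: nudge the expectation for a contextual factor.
    cva = ks4 - (expected[ks2] - (10 if fsm else 0))
    print(f"{name}: VA {va:+.1f}, CVA-ish {cva:+.1f}")
```

The thing to notice is that both columns describe what has already happened, relative to an average of peers; neither line of arithmetic predicts anything.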
'CVA is dead! Long live VA!'
See? We're not so different, you and I, Austin Powers. So while I was playing fast and loose with the terms, I think my substantive point about their prognosticative powers remains, i.e. that they are statistical estimates, entirely devoid of predictive power. As Mike points out, neither CVA nor VA is intended to be used in a predictive way- not by the FFT, not by any serious statistician involved in its production- and I am well aware of that.
But that's not the point: the point is that this is how the measures are used in schools. Believe me, most schools I work with DO use CVA and VA as methods for retrospectively evaluating school, department and classroom teacher performance. And they shouldn't. But they do. It's no good defending CVA/VA by saying, 'ah, but that's not what they're supposed to be used for.' They ARE used for that purpose. When Ofsted, or any other interested party, comes in for an inspection tango, they don't look at your CVA/VA and go, 'Let's just ignore this, shall we?' They get stuck into it, boots first, as a method of evaluating the school's performance. And teachers are called out on it. A mother might leave a credit card in the hands of a teenager 'for emergencies', but if the card is regularly maxed out in Top Shop and Nando's, then you'd think again about the strategy, just as I'd think again about how such data is disseminated.
2. CVA has been phased out, in response to the Schools White Paper, The Importance of Teaching, November 2010. Value Added is the measure now being reverted to. I hold my hands up to this one- serves me right for not keeping abreast, and it took a few kind people on Twitter to point it out. But like I say, Value Added still retains the intrinsic problems it shares with CVA:
From the TES:
'The...review of the English exams
system, conducted by Sir Richard Sykes and published this week, attacks the
“implied precision” of CVA as “spurious”, which it says makes the measure
“unfair” to schools, teachers, pupils and parents.
The review says it should be abandoned, along with value
added, unless the “underlying validity to their methodology” can be proved. A
leading academic said this week that it could take five years to develop a
replacement measure....Professor Stephen Gorard, from Birmingham University, has
warned about the dangers of the measure before it was even introduced. He said
there was so much missing data and “measurement error” that the end result was
a “nonsense”. “If the use of CVA continues I think there will come a time
when it results in a court case," he said.'
I don’t know Professor Gorard, but I likes the cut of his
jib.
Other points:
No, I don't use Wikipedia as a source to justify my arguments! In the case he refers to, I thought the Tweeter was from abroad, and might just need directing to a brief explanation of the term. Mea culpa, mea maxima culpa, but like I say, it has no direct impact on the problems of CVA or VA.
He is so wise.
'What am FFT?' That was my darling writing style, not an error. See: Bizarro, Mongo from Blazing Saddles, Rudolf Steiner, et al. Oh, and I never implied that they are, like, 'the man', oppressing schools with their heavy data. They do what they do. They are entirely, I am sure, without spot or blemish. It's what happens to that data after it leaves their hands that I object to. They can sell it or give it away for all I care.
Mike Treadaway, Director of the FFT, is 'horrified' at how schools use the estimates, you say? So am I! I don't blame him. But what people often fail to appreciate is that as soon as you issue estimates of what pupil x is statistically capable of achieving, using their peer group as a baseline, then you ARE in the business of implying that a child should be reaching level Y. Trust me, if a pupil goes below the FFT estimate, you had better believe that schools, parents and LEAs get on the blame bus. It's no good saying, like Mr Spock, 'Ah, this child has defied the FFT estimate. How interesting.' No: it's clobbering time. Performance Management, SEFs (RIP), inspections, all lean on FFT data as gospel. This is how it's used in schools. That's why I think we need to cut up the credit cards.
One of my main problems with the concept of value added is that it's lifted directly from the marketplace: the difference between the sale price and the production cost per unit. It also refers to a feature of a product that goes beyond the standard expectation, like a car with a holder for a mug. It describes how a product increases in value as it goes along a production line.
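Just to show how blunt the marketplace version of the term is (widget figures made up, obviously):

```python
# The marketplace meaning of 'value added', with invented figures.
production_cost_per_unit = 3.50   # what the widget costs to make
sale_price_per_unit = 5.00        # what it sells for

value_added_per_unit = sale_price_per_unit - production_cost_per_unit
print(f"Value added per widget: £{value_added_per_unit:.2f}")  # £1.50
```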
What some schools see when they get data.
Now aren’t THEY lovely metaphors to use to describe the
education of children? The answer’s no, incidentally. I mean, I get it, I
understand what it’s trying to say, and there’s certainly some good in it- we
all want to ‘add value’ to children- but the danger of metaphors lies in over-identifying
the signifier with the signified. Education isn’t a commodity; a child isn’t a
product. There are fundamental conceptual differences, and if someone doesn’t
grasp that then they shouldn’t be in education.
An A isn't the target I set my children. It's the target I set for myself. The target I set for my children is that they try their damnedest, every time. Sure, for some kids that will mean I task them up differently, to meet their different needs. But the day I tell a kid I expect anything less than the best from them is the day I hang up my cardigan. They can get an A or they can get a D; as long as they- we both- tried our best. Run, Forrest, run!
The use of data in teaching is an interesting conversation; but the bean-counters have had their way for too long, and we now have a generation of teachers and school leaders who a) believe FFT data is predictive and b) have forgotten how to predict grades or set their own targets with confidence. That job has been taken out of the hands of the experts- the teachers- by people who are obsessed with accountability and market-based models of education.
The problem is that schools aren't factories where ball-bearings are made. And a large part of what we do defies the spreadsheet. Children don't learn in smooth, incremental gradients. They stall; they reverse. They leap forward; sometimes years later. We don't deliver parcelled units of knowledge or learning; we teach; they learn. The process is abstract, intangible at times, and often maddeningly defiant of metrification.
Just like people.
I pretty much agree. It is overused by people who have what could charitably be described as an 'interesting' take on how statistics are used. I suspect the defence of FFT data by schools would be along similar lines to the publication of national school data: 'We simply publish raw data; if newspapers want to use that to compile league tables then that's their business'. A bit like Fox News' 'We report, you decide', really.
There is a problem, though: some teachers inhabit the lower end of Dunning-Kruger and can have a distorted view of kids' potential (sorry, couldn't think of a better word), and in those cases data can be very useful. For new teachers who don't yet have the skills to pay the teacher assessment bills, having the background numbers can also be very helpful when trying to judge a class.
As someone with a reasonable understanding of the stats, I'm comfortable working with FFT as I will make a judgement on what to use and what to ignore, but I imagine it is very off-putting for some people, particularly given the institutionalised denigration of maths in culture.
I wonder how much of it is related to the current fashion for attaching numbers to everything in the hope that some mathematical credibility rubs off.
Hi Tom... enjoy your blog immensely. Don't know if you have come across this little gem: http://www.youtube.com/watch?v=1qQL5L31-1E
Using FFT estimates as targets is beyond stupid. What they show is that a group of children with the same KS2 scores will be normally distributed in their achievement at GCSE. Let's say the high point of the bell curve is grade C: that typically means around 33% of those pupils got grade C, 33% got higher grades and 33% got lower ones (even in the top 25% of schools!).
In most schools all of those children will have targets of a C. That means all the intervention/effort/recrimination is focused on those getting D or below even though such progress is common in the best schools. Meanwhile, some children who are judged to be 'on target' have the potential to do better. FFT publish these 'spreads' of achievement on their website, so if you're a teacher you can check this for yourself.
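Here's a quick back-of-the-envelope simulation of that spread (the distribution and its width are invented by me, not FFT's model or data), which lands at roughly the thirds described above:

```python
# Toy simulation of the 'spread' point above: pupils with identical prior
# attainment still end up spread across GCSE grades. The distribution is
# invented (mode at grade C, arbitrary width), not real FFT data.

import random

random.seed(1)
GRADES = ["G", "F", "E", "D", "C", "B", "A", "A*"]
modal_index = GRADES.index("C")

# Simulate 1,000 pupils who all share the same KS2 score.
outcomes = []
for _ in range(1000):
    idx = round(random.gauss(modal_index, 1.2))
    idx = max(0, min(len(GRADES) - 1, idx))  # clamp to the grade scale
    outcomes.append(GRADES[idx])

at_mode = outcomes.count("C") / len(outcomes)
above = sum(GRADES.index(g) > modal_index for g in outcomes) / len(outcomes)
below = sum(GRADES.index(g) < modal_index for g in outcomes) / len(outcomes)
print(f"at the 'estimate': {at_mode:.0%}, above: {above:.0%}, below: {below:.0%}")
```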
If your school leaders use FFT estimates as targets, give them an F, because their tracking system is actively hampering your efforts as a teacher to ensure all children fulfil their potential.