Noah Carl’s piece today for the Daily Sceptic describes, accurately, the widespread phenomenon of grade inflation at universities – both in the U.S. and the U.K. He reasons, drawing on the work of economist Stuart Rojstaczer, that this is attributable to a “consumer-based approach to teaching” in which academic pay and promotion are linked to student-based course evaluations. As Carl puts it:
Basically: if they’re too stingy with their grades, they’ll receive lousy evaluations, and in addition to the stress of dealing with irate students, they’ll be less likely to advance in their careers.
Now, I don’t know about the situation at American universities. But as regards British ones, this explanation is dead wrong – for two simple reasons, which are themselves highly instructive as regards the state of higher education in the U.K.
The first reason is that academic promotions – certainly from the middle of the university league tables upwards – are based almost exclusively not on teaching quality but on research quality. Your average Russell Group Vice-Chancellor couldn’t give two hoots about what happens in the classroom; come rain or shine, they are guaranteed to get all the bums on seats that they need in order to keep the show on the road in terms of student fees. VC pay is linked to performance, and performance generally means improvements in league table rankings, which bring prestige and (usually) an increase in the number of high-paying international students. What improves league table positions? Well, it’s down to a range of factors, but the only one that directly relates to individual staff performance is the quality of their research. (Something called ‘teaching quality’ is also often included, but this, importantly, does not actually mean teaching quality as a lay person would understand it – more on that below.) Naturally, this makes research quality the only really relevant metric in determining who gets promoted and who doesn’t – although some universities do have promotional pathways for staff who just want to be good teachers, mainly to keep those staff members happy.
The idea, then, that staff are concerned about student evaluations is laughable. The only effort that most research-active academics at U.K. universities put into their teaching consists of finding ways to avoid having to spend time in the classroom – and this is almost entirely the product of how they are incentivised.
The second reason the Rojstaczer/Carl hypothesis is wrong is that, for reasons related to those discussed above, student evaluations generally take place long before students actually get their grades. Students typically evaluate course content in the last session of the semester, and get their exam results months later. Similarly, they fill in the National Student Survey (their single opportunity to assess their university experience in a neutral forum) roughly in the middle of their final year of study – i.e., months before they receive their final degree classifications. The idea that staff are worried about student evaluations when they mark exams simply misunderstands the process – at least as far as common practice in the U.K. goes.
What are the real reasons for grade inflation, then? The main one is, in my experience, cultural. There is a kind of mother hen syndrome at work amongst many academic staff – a feeling that one should err on the side of generosity in all things when dealing with students. This is part of a wider cultural malaise, wherein the pursuit of excellence is in itself looked down upon as somehow harsh, patriarchal or ‘toxic’. I have been in staff sessions in which the opinion has been widely aired that it would be a good thing if all students got firsts – a concept that completely misunderstands the purpose of having exams, but which is indicative of the prevailing mood amongst a big cross-section of the academic profession. If one doesn’t particularly care about the pursuit of excellence, or indeed if one thinks it ‘problematic’, then the idea that a small number of excellent students should be set apart from their fellows as high performers is anathema. All must have prizes!
A subsidiary reason, though, is structural, and here we come back to league tables. Compilers of league tables can’t go into university classrooms and observe teaching. They therefore can’t actually assess ‘teaching quality’. But they do seem to feel that teaching quality ought to be relevant to their rankings. So, what are some easy, rough-and-ready proxies for teaching quality? One is something called ‘continuation’, meaning the percentage of students who progress from one year to the next. If students get bad marks they tend not to continue – or indeed are unable to continue if they fail. Give students better marks, and more of them continue. That’s one way in which universities benefit from grade inflation right there. Another proxy measure for teaching quality used in league table rankings is ‘graduate prospects’, meaning the proportion of graduates who are in employment or further study after graduating. What increases the likelihood that a graduate will get a job, or go on to postgraduate study? A good degree classification will certainly help. So there’s another way in which universities benefit from the grade inflation game.
Grade inflation, in other words, partly results from the weird obsession with driving down standards which is evident in every aspect of our culture. But it is also strongly linked to the desire of university VCs to ascend league tables by gaming the statistics upon which those tables are compiled. And, sure enough, most universities have over the past 10-20 years deliberately incentivised both higher grades (by dumbing down marking criteria) and higher degree classifications (through wheezes such as allowing students to disregard the worst-marked module in their final year when the overall classification is calculated) – with the result that employers now genuinely struggle to know whether a candidate with a first-class degree is actually any good.
You might draw the conclusion from this that university league tables are a really stupid and pernicious idea, and that maybe universities would survive perfectly happily without them, much as supermarkets, clothes shops, online retailers and driving schools somehow manage to struggle along and provide a decent service without being comprehensively ranked by the Guardian or the Times every 12 months. And you would be correct to do so.
Busqueros is a pseudonym.
“a concept that completely misunderstands the purpose of having exams”
Lol – highly “intelligent” and “educated” people who fail to understand the very thing they are there to do. As a civilisation, we do seem to be well and truly banjaxed.
Thanks to the author and to DS for this first-hand insight.
Not everyone can be a brain surgeon!
Merit is what drives intellectual progress, and on rare occasions savantism (Einstein and the like), but it is hard work and endeavour that power progress, not a free ride!
Thanks for the response. You may be right that Rojstaczer’s explanation does not apply to Britain, even if it does apply to the US. Incidentally, I don’t think the observation that “student evaluations generally take place way before students actually get their grades” is as fatal for the explanation as you suggest. Students sit preliminary exams and get predicted grades, which correlate strongly with their final marks. Hence many already know they stand a good chance of getting a first when they submit their evaluations.
I don’t think it’s the quality of research that determines a university’s position in a league table. It’s getting papers published in prestigious journals that counts. This should be an indication of the quality of the research, but in reality publication often depends on the conclusions a paper reaches. An obvious example is climate science, where a paper can be a load of tosh, yet if it supports the alarmist agenda it will get published; if it’s “sceptical” it’s likely to be rejected, even if it’s a brilliant piece of research. I’m sure the same applies in most of the social sciences: if a paper supports the woke agenda and makes frequent use of phrases such as “systemic racism”, “post-colonialism” and “toxic masculinity”, it’ll be published, but gender-critical papers, or those that don’t agree with critical race theory, for example, will be rejected.
As someone who was a TA in grad school many years ago in the USA, I can tell you without a doubt that, at least on my side of the pond, student-based evaluations of teachers are a BIG contributor to the problem of grade inflation. Maybe not the only factor, but a significant one nonetheless. It turns teaching into a popularity contest, basically.
Reversing grade inflation isn’t gonna be easy, but three things must be done:
1) Abolish the evaluations, yesterday.
2) Put a mandatory “sinking lid” on the percentage of students of each course who can be given “A” grades, gradually reducing it each year or semester until only the top 10-20% can get “A” grades.
3) Bring back “weed out” courses, to restore at least some semblance of rigor.
Problem solved. But that would make too much sense, of course.
I used to teach on what was supposed to be a “weed out” course, as you call it. We were proud of our standards and the achievements of our students who passed.
Unfortunately, when the “bums on seats” approach to funding the education of said students (who became known as “learners”) came into being, standards plummeted as the all-shall-have-prizes philosophy took over. We had no say in what was happening, as re-evaluation of students (being allowed to resit assessments until they passed) took place outside our department, and we were initially astonished by the appearance of certain students/learners in a more advanced course after the summer break. We learned that our standards were not respected.
I resigned…
Step 1: We don’t have enough of group X in higher education.
Step 2: But not enough of X pass the entrance exam.
Step 3: It couldn’t possibly be because they don’t have the aptitude.
Step 4: The entrance exam is measuring the wrong thing, so let’s abolish it.
Step 5: Not only X but almost everybody we admitted is doing badly.
Step 6: Make the course easy enough for everybody to do OK.
It would be better to have a quota for X rather than relax standards across the board, as the latter approach would reduce the quality of the entire student body.
Be careful what you wish for. Quotas cause their own set of problems as well.
And I worked so hard in the ’70s to get a STEM first – where do I claim my reparations?