2 thoughts on “Wonkhe: Grade Inflation: A Clear and Present Danger”

  1. I agree that there is an apparent problem, but I’m not sure the solution is appropriate. As others commented on the article itself, I think a major driver of this has been the inclusion of classifications in league tables, KPIs etc., or at least the reporting of them, creating the incentive to award higher grades at an institutional level as competition for students (and therefore money) increases.

    The mark a student gets at degree level for an individual piece of work (or exam etc.) is compared with a descriptor which is defined and advertised clearly in advance (and, to some extent, dictated by regulators). This is core to almost all degree-level assessment. Since the final mark is a combination of several such marks, the same is true of the degree result. What is also true, however, is that a given module is expected to have “well distributed” results, say with a mean score somewhere between 50% and 70%. If the mean is outside that range, or oddly distributed, then there is normally discussion as to whether the assessment/module is appropriate. That means that, in principle, you can say that a 1st class student can do X, a 2.1 student can do Y, etc. The current system actually helps with things like employers requiring a certain classification, though this also technically means that there has to be some appreciation of what a 1st looks like from a given institution, as there is variation. Moving to a “fully proportional” system would break that principle.

    I wonder if part of this is because teaching methods and technology are changing so quickly. Compared with 10-20 years ago there is, for example, much more acknowledgement that everyone learns in a different way. It is refreshingly less common for lecturers to assume that everyone learns the way they did. It could be that in times past results were relatively poorer because teaching methods were not as flexible, and it just takes time to adjust (which is difficult in a KPI-dominated era). [I have no evidence to hand to back up this hypothesis.]

    I also think it’s dangerous to compare universities with, say, schools, which is often tempting. At A-level, for example, tens of thousands of students sit the same exams, so one would expect fairly “normalised” behaviour (extreme situations aside). At universities, an assessment (not necessarily an exam) can be completed by just a handful of people. The largest assessments might have a few hundred people sitting them, but even then “normalisation” is very difficult at such scales. By the time you’ve got large enough numbers to analyse statistically, the module has changed and you have to start counting again!

    1. I’m not wedded to that particular solution, but any solution has to be (a) something that is within universities’ power to implement and (b) a reasonably stable equilibrium against defection. I’d love league tables and employers to stop using ‘good degrees’ but I don’t think that universities can make that happen.

      The trouble with the system you describe is that it’s very vulnerable to a ratchet effect. And that’s even before considering that almost half of universities have altered their algorithms recently, sometimes leading to a jump of up to ten percentage points in firsts.

      To be clear, I’m not suggesting quotas for individual assessments (I agree those would be unhelpful). In the event of a pledge like the one above, I’d envisage a root-and-branch review of those standards and algorithms to reset them at a level which would consistently deliver results within the agreed proportions (so if we said no more than 25% firsts (or starred firsts!), you’d calibrate so that a typical year would give you 20%, allowing for natural fluctuation). Individual examiners could then continue marking as they do now against the higher standards.

Comments are closed.