"After years of waiting
After years of waiting nothing came
And you realize you're looking,
Looking in the wrong place
I'm a reasonable man
Get off my case"
Packt Like Sardines In A Crushd Tin Box
The results of the Research Assessment Exercise (RAE) were announced last week. As readers may know, I'm not keen on it, so this isn't an objective view, but I thought I'd explore the motivation behind it, and the problems with it.
I have three main objections to the RAE:
- It is overly complex
- It is expensive
- It is fundamentally flawed
Complexity
What is the justification for the RAE? There are two main reasons put forward:
1) It allows transparent and objective allocation of research funding to universities
2) It provides UK academics with a recognised standard for their research, which is transferable between universities.
Let's take this second argument first - I guess the idea is that academics can easily move between jobs and have their research recognised. Except, this year the results are so confusing that no-one knows what they mean, and there isn't an individual rating. So from an individual's perspective it's not much use - except maybe to say 'I work in a 4* unit', or 'I want to join a 4* unit'. But as this year's results can be interpreted in a variety of ways, that doesn't mean much anyway. For instance, my own unit is either 3rd, 9th or 24th in the UK, depending on which way you tweak the figures. That's quite some margin of interpretation.
Expense
The main justification is that it lets the government easily allocate research funding. There is probably something in this, but the system they have produced is so complex that it defeats the object. I asked on Twitter the other day if anyone had done a return on investment analysis for the RAE. For the unit I was involved with (Education), full-time staff have been appointed to it for over a year, a number of high-level staff have worked on it part-time, an administrative and database system has been created, and each academic has had to work on their own individual submission.
When we submit bids for research grants we are required to give a full costing of time, plus usually 40% overheads are added, and increasingly we are asked to estimate 'opportunity costs', i.e. what we lose by doing this when we could have been doing something else. For the massive, distributed work of the RAE, which often requires your best researchers to help coordinate, these costs must be considerable, and yet no such return on investment is required. And that is just one unit of assessment - there are many others across the university.
Why? Because as Simon Caulkin in the Guardian puts it:
"the RAE is a potent symbol and vehicle for the bullying top-down managerial culture that has steadily eroded both the quality of working life and results in much of the public sector."
The RAE is an agreed con - it has value because we say it has value. A university has to participate because otherwise it would not be able to attract certain funding or recruit new academics, and it needs to allow its current academics to gain a rating. Academics need to participate because it forms part of the agreed career and promotion profile. A few academics might be able to say no, but only if universities stopped playing the game would the system be threatened, and as money is involved, that won't happen. Added into the mix is the academic publishing business - this exists largely because of formal assessment exercises like the RAE. Because the RAE recognises this type of output, academics are forced to publish through this means.
But, because it's so expensive to participate in fully, any notion that it creates a level playing field is a nonsense - the better universities can afford to put the administrative effort into it, to release people from teaching to concentrate full time on preparing the narrative, and so on.
Fundamentally flawed
The RAE won’t be the RAE anymore, partly in recognition of the problems above, but there will be some assessment exercise. The first two issues can potentially be addressed by modifications. More damning, though, is that the whole attempt to quantify research outputs and link these to funding is, at its heart, flawed. Here are the main flaws:
The experimenter effect
It is ironic, to say the least, that a research assessment exercise fails to understand a basic of research, namely the experimenter effect. The very act of measuring changes behaviour, and doubly so when it is linked to money. Academics have to play the RAE game – and this inevitably means a focus away from teaching, or even doing research that won’t directly or obviously link into the RAE. John Naughton relates this anecdote:
“In one major academic department I know, the most creative and original member of the department was excluded from the RAE by his colleagues because his pathbreaking work ‘didn’t fit the narrative’.”
The categorisation error
I work in educational technology, which was grouped (or lumped might better describe it) in with more general education. Actually there is little in common between the two in terms of what they value, what they deem serious outputs, what the major research questions are, etc. Educational technology felt very much like a poor cousin, and one often had the feeling of having to twist one's research to fit ‘their’ criteria. And this is repeated across many domains. The point is not that they may have the categorisations wrong, but that no categorisation can be correct. This is particularly true of highly innovative research, which by its very nature won’t fit into a pre-existing category.
Measuring the unmeasurable
The RAE is very New Labour, with its almost Stalinesque obsession with quotas and direct measurement. The problem is that research is rarely like that. One fantastic paper is not worth two good papers. And the more the system tries to accommodate these various factors (e.g. with measures of esteem, as it did this time), the more convoluted and cumbersome it becomes. And if we add in blogging, online activity, creating software, YouTube videos, etc., then it becomes even more complex. How would we measure Michael Wesch’s output? By his research publications or his YouTube views? Again, there is talk of addressing this next time round, but it will always be chasing the game.
And this leaves academics with an unenviable choice – do they play the game and concentrate on RAE type outputs or do they work on creating new forms of identity and communication, which may be more relevant (not to say interesting), but run the risk of being ‘invisible’ to any official view?
What alternative is there?
So maybe the RAE, or something like it, is inevitable: we need to allocate money to universities for research, and as soon as we do, a complex, unsatisfying system will follow. Still, here are some alternatives:
- Don’t allocate research money – instead make it very easy and quick to bid for.
- Allocate a set amount to all universities, based on their agreed research rank (eg from the last RAE), and then have another set which can be ‘won’. This has the advantage of at least allowing universities to know how much they will be receiving, whilst still allowing them some ability to develop.
- Allocate an amount based on some very easy-to-measure unit - e.g. number of publications drawn from a database. It’s flawed, but we all know what it is and can get on with the rest of it easily.
- Abandon categories and allow these to flow from the actual tags writers use.
- Universities ignore whatever comes next and instead promote their own research and concentrate on getting funding.
None of these solutions is perfect, but they have the virtue of being cheaper and less pointless than the current system. After all, I’m a reasonable man…
And here is AJ Cann's more succinct take on it: