[Image from gringer: https://www.flickr.com/photos/gringer/5096129532/]
As a senior member of staff (I know, how did that happen?), part of my duties currently includes reading papers for the UK research assessment exercise, the REF. I've previously moaned about how this reinforces traditional publication models. This is a more general moan.
I think the REF has muddled objectives. It aims to make academics accountable to some extent, but primarily acts as a means of allocating research funds. It may have some secondary aims, or you could classify these as indirect consequences, such as reinforcing the position of the main publishers and maintaining the status of Russell Group universities.
When you have multiple objectives, the resultant process is often unsatisfactory. So if we ignore the secondary aims (although I wish the Finch report had been brave enough to make clear reference to the REF's role in undermining the move to open access), how might the first two best be served?
Let's assume for now that you want to go along with the accountability drive. There are of course plenty of reasons why this may not be a good idea, but that's for a different post. As someone who has to read papers and try to judge them, I find the real problem with the REF is that it brings in quality judgements. I thought that educational technology was a special case, in that it didn't fit well into a broader education category, and so the people making assessments weren't sufficiently well acquainted with the subject area to make decent judgements. But I've spoken to people in other disciplines and everyone feels the same. The specialisation of research means that no categorisation can be fine-grained enough to really capture it. And it's a fruitless task anyway: who can judge the impact or quality of a paper? I think you can spot a poor paper easily, and maybe a really excellent one, but 90% of the rest sit in the middle, so judging whether something is a 2* or a 3* is essentially a coin toss.
My suggestion, then (if you want to do the accountability thing), is to simplify it and just apply a gateway threshold. For example, you could take the REF guidelines: in four years, someone with a research component to their contract should be expected to publish four refereed articles, bring in one piece of research funding and supervise one student. Against this simple criterion you can then have accepted trade-offs: running large university projects, digital scholarship equivalents, personal consultancy as funding, etc. But you remove the quality judgement; it's a simple measure of what a research-active member of staff should be expected to deliver. I don't necessarily agree with it, but we'd all know what the rules were.
The next element is allocating research funding. You could simply do away with direct funding and have it all come through research councils, as happens in some countries. Or you could distribute it evenly, or according to the number of staff who've passed the threshold above. But I've got a more interesting suggestion: some recent maths research (from those crazy Italians) argued that random selection of some, but not all, politicians would benefit democracy:
"the introduction of a variable percentage of randomly selected independent legislators can increase the global eﬃciency of a Legislature, in terms of both the number of laws passed and the average social welfare obtained."
I wonder if the same might apply to research funding? Some would be allocated according to worth (but using a simple system), while a percentage would be allocated randomly, on the proviso that it is used for research. This might well generate innovative research we wouldn't see otherwise. It also prevents a self-reinforcing group getting caught in a version of groupthink. And it'd be more fun.
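As a rough sketch of what such a hybrid scheme might look like (all the names, scores and percentages here are illustrative assumptions, not anything the REF actually does):

```python
import random

def allocate_funding(applicants, total_fund, random_share=0.2, seed=None):
    """Split a funding pot two ways: most of it goes out in proportion to
    a simple merit score (e.g. how many threshold criteria were met), and
    the remaining `random_share` is handed out as equal lottery grants to
    randomly chosen applicants, regardless of score.

    `applicants` maps name -> merit score. Purely a sketch of the idea.
    """
    rng = random.Random(seed)  # seedable for reproducibility
    merit_pot = total_fund * (1 - random_share)
    lottery_pot = total_fund * random_share

    # Merit portion: proportional to each applicant's score.
    total_score = sum(applicants.values())
    awards = {name: merit_pot * score / total_score
              for name, score in applicants.items()}

    # Lottery portion: a quarter of applicants (at least one) win an
    # equal slice, independent of their merit score.
    n_winners = max(1, len(applicants) // 4)
    for name in rng.sample(list(applicants), n_winners):
        awards[name] += lottery_pot / n_winners
    return awards
```

The pot always sums back to `total_fund`; the only policy knob is `random_share`, the percentage set aside for the lottery.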
I know what you're thinking - why isn't he in charge of the next REF? We can only shake our heads.