Another thing I have been pondering as a result of the Thessaloniki meeting of the FLOSScom project is the extent to which the informal learning that takes place in FLOSS communities is at odds with the demands of formal education. The assumption is that a good deal of learning takes place in FLOSS communities, and that it is often the kind of situated, social learning many of us in higher education would love to develop in our own practice.
Let us take a small example. One of the partners talked about a course where they had exposed computing students to FLOSS projects, and about some of the difficulties of mapping activity in the open source project onto formal assessment procedures within the university. They have toyed with the idea of using some of the metrics which can be seen as proxies, or informal indicators, within the FLOSS community for an individual's worth or contribution. They gave number of views (on, say, SourceForge) as an example; one could think of many others, e.g. frequency of posting in a forum, number of times a piece of code is reused, number of accepted contributions to the source code, and so on.

But the problem is that as soon as you make these the formal criteria by which people are assessed, you influence their behaviour. In the case of something like number of views it might mean students simply keep hitting refresh, whereas with more robust indicators such as accepted bug fixes it might skew the whole community towards fixing bugs (maybe even deliberately introducing bugs so they can be fixed) and away from the many other tasks that are not formally accredited, e.g. coordination.
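To make the measurement problem concrete, here is a minimal sketch (in Python, with entirely invented weights and activity counts, not anything actually proposed by the partners) of the sort of composite "contribution score" such proxies invite. Once the formula is known, the cheapest term to inflate wins:

```python
from dataclasses import dataclass

@dataclass
class ActivityCounts:
    """Hypothetical per-student activity pulled from a project tracker."""
    page_views: int        # e.g. views of a SourceForge project page
    forum_posts: int       # messages posted to project forums
    accepted_patches: int  # contributions merged into the source tree
    bug_fixes: int         # bug reports closed with an accepted fix

# Illustrative weights only -- not from any real assessment scheme.
WEIGHTS = {
    "page_views": 0.01,
    "forum_posts": 0.5,
    "accepted_patches": 3.0,
    "bug_fixes": 2.0,
}

def contribution_score(a: ActivityCounts) -> float:
    """Weighted sum of raw counts -- the kind of proxy discussed above."""
    return (WEIGHTS["page_views"] * a.page_views
            + WEIGHTS["forum_posts"] * a.forum_posts
            + WEIGHTS["accepted_patches"] * a.accepted_patches
            + WEIGHTS["bug_fixes"] * a.bug_fixes)

# A student doing real work...
honest = ActivityCounts(page_views=200, forum_posts=10,
                        accepted_patches=4, bug_fixes=2)
# ...versus one who has simply kept hitting refresh.
gamed = ActivityCounts(page_views=5000, forum_posts=0,
                       accepted_patches=0, bug_fixes=0)

print(contribution_score(honest))  # 23.0
print(contribution_score(gamed))   # 50.0
```

The student who refreshes a page five thousand times outscores the one who actually lands patches and fixes bugs, which is exactly the behaviour-shaping problem described above.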
I remember watching a football programme once in which Jimmy Hill (a well-known football pundit in the UK) was discussing how penalties were no way to decide a game. He suggested going on the number of corners accrued in a game: this would benefit the attacking team, he proposed, and if a cup game was all square at the end, the team who had won the most corners would be declared the winner. I was amazed that even as a child I could see the flaw in this argument and this (ahem) expert could not. As soon as you made corners the deciding factor, teams would play to win corners. This would lead to an even duller game than when a team plays for penalties. It would give rise to the bizarre scenario of a team nearing the end of the game booting the ball upfield, whacking it against an opponent (making no attempt to score a goal, of course) and then running wildly up the pitch celebrating because they have won a corner.
My point is that, rather like the observer effect associated with Heisenberg's uncertainty principle, you cannot measure something without influencing the thing you are measuring. And formal education is obsessed with measuring, scoring and testing, so I feel that any attempt to bring informal learning methods into higher education will end up destroying whatever it was in those methods that made them worthwhile in the first place. Unless, that is, higher education itself is changed by the process...
This is a well-known problem with metrics for evaluating software development - you'll find lots written on it if you look in the right places. It doesn't stop a lot of project managers trying though!
Academia isn't so dissimilar - if you implicitly make the number of papers published an important factor in recruitment, as some people do, academics will publish lots of not-so-good papers rather than one really good one.
Posted by: Juliette | 15/11/2006 at 02:45 PM