Well, my previous post on data for MOOC completion rates caused a bit of a kerfuffle on Twitter. It was interpreted by some as saying "ONLY completion rates matter", and also as not taking into account other factors, such as what learners who don't complete get from a MOOC. That seems rather like criticising Alien for not being a rom-com to my mind - they're doing different things. This research was showing one aspect with the quantitative data available. It is part of a bigger picture which ethnographic studies, surveys and more data analysis will complete. It wasn't attempting to be the full stop on MOOC research.
Anyway, here is another graph that Katy created, showing attrition rates of active users (those that come into the course and do something, not just those who complete assessments) across disciplines:
That's a pretty consistent pattern. If we saw it in nature we'd give it some name like "The MOOC attrition law". My interest is as a course designer, so given that the drop-off pattern seems fairly robust, what does it mean for design? (Doug will have issues about the power-lawness of this)
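As a rough illustration of what "power-lawness" means here (the weekly numbers below are made up for the sketch, not Katy's actual data), a power-law drop-off is a straight line in log-log space, so you can fit one with a simple least-squares regression:

```python
import numpy as np

# Hypothetical weekly active-user counts for one MOOC run
# (illustrative numbers only, not the data from the post).
weeks = np.arange(1, 8)
active = np.array([10000, 4800, 3100, 2400, 2000, 1750, 1600])

# A power law active = c * week^k becomes a straight line in
# log-log space, so fit log(active) against log(week).
k, log_c = np.polyfit(np.log(weeks), np.log(active), 1)
predicted = np.exp(log_c) * weeks ** k

print(f"fitted exponent k = {k:.2f}")  # negative: steady attrition
```

A sharply negative exponent across many courses would be one way of putting numbers on the "MOOC attrition law"; whether the curve really is a power law (Doug's objection) would need a proper goodness-of-fit test, not just this line fit.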
I think there are two responses (but maybe you can think of more).
Design for retention
The first is to say that completion is a desired metric. There may be courses where you really do want as many people as possible to complete. Imagine you were running a remedial maths course, for instance: it won't help your learners much if they only cover a third of the subject matter they need for whatever purpose (Bridge2 Success got learners through maths so they could get onto an employment program, so completion was very important here).
In this case you need to address the 'problem' of drop-out, because it is a problem for you. There might be a number of ways you do this: by adding in more feedback, using badges to motivate people, creating support structures, supplementing with face to face study groups, breaking your longer course into shorter ones, etc. The point is that you design in features that aim to improve completion.
Design for selection
The second design approach is to say that completion isn't an important metric. Here you accept the MOOC attrition law and design the experience with that in mind. I have some sympathy with Stephen Downes when he says no-one finishes a newspaper but we don't talk about people 'dropping out' of a newspaper (I've heard him say this but can't find a link - anyone?). So even to talk about drop-out is to map the wrong metaphor to MOOCs. His analogy breaks down a bit however because not all readers drop out at page 7 of a newspaper, they dip into different sections. People tend to drop out of MOOCs by week 3. It's not as if they're coming in and doing a bit from week 7 and a bit from week 5, and then leaving. They're simply not getting to those later weeks. And even if you are of the 'completion doesn't matter' camp, I'm sure most course designers don't think the content in week 5 is half the value of that in week 1.
So, in this design approach you might break away from the linear course model, to allow people to do the 'newspaper' type selection. A course might be structured around themes for instance, and each one around largely independent activities (I tried to design H817open a bit like this). So in this case completion really doesn't matter, learners take the bits they want.
In both cases I would suggest that the completion rate data is useful for you. In the first case you know what type of completion rate to expect, and in the second one it drives you to be more innovative in design approach. And that's the point about the research - it helps inform decisions.
By the way - this is my fifth blog post in 5 days. Just in case Jim Groom berates me for not blogging often enough...
Hi Martin,
I like the idea of breaking the linear course model. It's something we discussed about 6 years ago when considering our online modules in the Faculty of Health at Edge Hill Uni. I think when you're relying on a social element to learning (constructivism, connectivism, etc) then you need to be sure you'll have enough numbers dipping into the different areas at the same time - something we couldn't be sure of with a module with 20-30 participants and 10 units of learning, but something entirely possible with the registrants in MOOCs.
I'd like to see that implemented actually, and come to think of it, may suggest it as an experiment in one of Liverpool's FL MOOCs. Some MOOCs make all content open from the beginning (e.g. SNA on Coursera) but it's not necessarily intended to be studied non-linearly. I think it would require an academic/SME to think a little differently. Often there'll be at least one section that is a prerequisite to another, so that might need designing around further still.
Cheers
P
@reedyreedles
Posted by: Reedyreedles | 13/12/2013 at 06:35 PM
Martin - thanks for this and for your previous post with the completion rate charts. I think that if completion matters then plenty of "up front" clarity about what a particular MOOC will involve will help. In the olden days (late 1990s) I discovered when responsible for the early runs of the non-MOOC Learning to Teach On-Line (http://www.online.sheffcol.ac.uk/index.cfm?ParentID=7f6d8400-59f1-45ae-b10d-03b0b3f97d8b) that having a clear "pre-course assessment" process allowed prospective learners to check properly what the course would involve, and to confirm before signing up that they knew what they'd be letting themselves in for, and were "up for it". Completion rates increased from say 40% to 75% once we'd introduced this stage into the enrolment process. This is the approach that we are taking in the design of a MOOC that I am currently working on called Citizens' Maths (http://citizensmaths.com/). We'll not be able to prevent people just joining - nor would we want to; but we will be encouraging them to engage with a pre-course process as part of signing up.
Posted by: sschmoller | 15/12/2013 at 05:41 PM
hi and thanks for posting that. I think the idea of making all content available from day one, as you suggest, is a good one. I had this in a recent MOOC, and in that case, I think a better measure would be not if learners complete assessments, or which week they drop out, but how much of the content they engage with (e.g. links followed, videos watched or downloaded, degree of posting on social media and discussions). I have also registered for a MOOC where the free e-book is made available even before the MOOC starts. Right there, they have engaged new learners at the point of registration when they are interested, because many of us register for a MOOC, then weeks or months later when it starts, are too busy with other things. But having that book means, if I start reading it, I might get hooked and be more likely to participate in the MOOC itself to take the ideas further (or that's my expectation for why they did it, anyway). If nothing else, the course designers have disseminated an e-book they co-authored, even if folks never get into the MOOC itself. And of course there are the MOOCs where the excitement builds on Twitter and/or facebook before the MOOC ever starts...
Posted by: Bali_Maha | 15/12/2013 at 05:58 PM
@Seb - what you say is interesting. We were talking about building diagnostic tools for potential OU students recently and I was making the case that perhaps the best diagnostic was the material itself, ie here are the first 3 weeks of the course, what do you think? So in this respect building pre-course assessment for MOOCs seems a bit self-defeating, the openness of the course fulfills that function. But you may be right, if people sign up for a MOOC and drop out that can be damaging to their confidence. Also by merely getting them to do the pre-course material may increase commitment to the course.
@Maha - hi, yes, some MOOCs make everything available on day 1, it would be good to see if this has any effect on completion, or on satisfaction rates. It certainly allows you to dip in more, but isn't quite the same as designing for someone to take any piece in any order, it will probably still be a linear sequence. The book angle is interesting, you're right maybe it plants a seed that germinates later.
Posted by: mweller | 17/12/2013 at 02:51 PM
There are alternatives to linear design that also may stimulate completion. One that I favor is called CPOM - Core Plus Option Modules. The design is simple enough. The price of further admission is completing the first 'core' module. Then you get to select from a number of options. This is even a bit like a newspaper. Just about everyone, even the astrology addicts, scans the front page. Then they turn to whatever floats their boats - sports, comics, global news, stocks, whatever.
So in a CPOM design on any topic, the "front page" is a module that introduces some basic background knowledge, terminology, and indications of the cognitive map of the topic. Then there would be some number of option modules - each lasting e.g. one or two weeks.
Course completion would consist in finishing the core plus some small number of option modules - let's say 3. Further, for those not completing the core, we can even choose to disregard them in our retention statistics, on the grounds that we never had them. In my almost 50 years of university teaching I never included the kids who dropped out in the first week in my final stats. By mid-term they were nowhere to be found on any of my course printouts.
We could even have CPOM exams. Everybody takes section 1 (core) and then selects the other sections based on which modules they select to be tested on.
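The counting rule described above is simple enough to sketch: completion means core plus at least three option modules, and anyone who never finished the core is excluded from the denominator (the learner records here are hypothetical, purely to show the arithmetic):

```python
# Hypothetical learner records: did they finish the core module,
# and how many option modules did they complete? (Illustrative only.)
learners = [
    {"core": True,  "options": 4},
    {"core": True,  "options": 3},
    {"core": True,  "options": 1},
    {"core": False, "options": 0},  # never finished core: excluded below
    {"core": True,  "options": 0},
]

REQUIRED_OPTIONS = 3  # "some small number of option modules"

# Denominator: only those who completed the core, mirroring the
# suggestion to disregard early drop-outs ("we never had them").
started = [l for l in learners if l["core"]]
completed = [l for l in started if l["options"] >= REQUIRED_OPTIONS]

rate = len(completed) / len(started)
print(f"CPOM completion rate: {rate:.0%}")  # 2 of 4 -> 50%
```

Under a conventional count (all five registrants in the denominator) the rate would be 40%, so the choice of denominator alone shifts the headline figure noticeably - which is exactly the retention-statistics point being made.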
Posted by: Ljwaks | 19/12/2013 at 11:53 PM
The CPOM model is a good one. What you say about not including those who don't complete the core in retention statistics is exactly the subject of my next post. I'm not sure CPOM would work for all subjects eg for stats there are just things they have to know. Also, students aren't always in the best position to know what it is they need to know. But I like the approach.
Posted by: mweller | 20/12/2013 at 07:16 AM
Hi Martin,
There is some really interesting data coming out of this site and Katy's. We have been finalising the statistics from a MOOC that we ran for 11 weeks in July last year, and had some surprising results that we believe are due to considerations in design, online support and having a cohort-centric approach. We have just had a correspondence piece published in Nature: http://www.nature.com/nature/journal/v505/n7481/full/505026a.html and data about registrants and completion rates can be found here: http://www.utas.edu.au/wicking/wca/mooc/data If someone is updating the data on Katy's site whilst she is on leave, it would be wonderful if we could include our data as well.
Posted by: Netty Gibson | 02/01/2014 at 10:58 PM