Chris Rust on assessment and feedback
Duration: 10 minutes, 34 seconds
Chris Rust (External Consultant).
So, assessment, crucially important, a crucial role in the learning process. That's very clear from the literature, but you again don't need to travel very far or, you know, read through too many Times Highers without finding criticisms of our practice.
If you work your way back through QAA subject reviews, the single biggest criticism over and over again is around assessment and assessment practices, the speed of feedback, the quality of feedback, etc. And that, as has already been mentioned, has more recently been mirrored in the National Student Survey.
But if you are working in your departments to try and create assessment strategies, if you do want to try and look at your practice, I suggest this may be a useful checklist.
Firstly, it seems to me, we should be trying to ensure we've got constructive alignment, Biggs' phrase. I'm sure many of you know what that means, but just in case you don't, this is not a hugely complex idea. What Biggs means by constructive alignment is that, when you design your course, you should have clear learning outcomes. You should then make sure that your teaching methods clearly, explicitly, logically, are the best possible ways that you can think of to take your students to those outcomes, and your assessment, the key for this morning, your assessment should focus on whether the outcomes have been achieved or not. This does not seem, to me, to be rocket science.
So ensuring it addresses that issue of validity I brought up earlier, ensuring our assessment processes truly assess the learning outcomes of the course.
Clearly, we should have explicit assessment criteria. From the literature around deep and surface approaches to learning, comes this argument, that if you want your students to take a deep approach to their learning, you should try and avoid, quotes, "Threatening and anxiety provoking assessment strategies."
We need, I would suggest, to make sure we are engendering, as much as we can, intrinsic motivation, and it seems to me that links to ideas about relevance and purpose. Trying to give the students activities, as part of the assessment, which they can see why they are doing. "Why would I ever need to know this? Why would I ever do this? When I have left university, what possible use is this to me?" And maybe, where possible, giving students choices. We know people are more motivated if they can do the things that they, themselves, have chosen.
We need to pace student learning, I would suggest. In particular, this is even more important, it seems to me, with the widening participation agenda.
And linked to that, we need to structure their skills development, to make sure that our assessments, in particular in the first year, are identifying the skills we want to develop, clearly helping support their development, and assessing them, and giving them feedback on it.
And Mantz Yorke's phrase, again linked to widening participation in particular, we need to find ways of allowing for slow learning and early failure.
If we believe that, whatever we're teaching students need to be engaged, they need to actively engage, they need to construct understanding for themselves, we don't believe in transmission models, and so on, if that's what we believe about learning in general, why do we think trying to help students understand assessment, and assessment criteria, should be any different? Because up to this point, we've been working with trying to make criteria ever more explicit; we've been trying to help students understand criteria. So we suddenly said, "Well surely, we need to take a social constructivist approach to this, just like we were teaching anything else." So we said, "Okay, what would a social constructivist view of the assessment process be? What would that look like?"
So what we said was, "It's no good just having explicit criteria." We'd already moved to this point from some of our work. "What we need to do, is we need to get the students actively to engage with the criteria. It's no good just giving them the criteria and expecting them to understand these words, like analysis and evaluation."
One way that we found worked very effectively in our Business School was getting the students marking pieces of work, using criteria. And we've shown very clearly that those students, through that process, learn what those criteria mean, and go on to produce better work as a result.
We've said repeatedly this morning how important feedback is, and in that list from Gibbs & Simpson, there's the whole issue about, how do you get students to engage with feedback? If we're going to make feedback work, not only do we need to make sure it's prompt, and written in a way they can understand, etc., we need to get them to engage with it.
Tom Angelo, I thought this was absolutely brilliant, he said, "Well, if you want feedback to work, there's one golden rule." And he said, "This is exactly the same as for your murder detective."
So firstly, you've got to give them a motive. You've got to give them a reason. "Why would I want to read this feedback? Why do I believe this is going to be any help to me? Am I ever going to have to do this thing again?"
And I suggest this leads you to the notion that we should do far more work around first drafts, second drafts, rewriting, redoing exercises, than we currently do, but you need to give them some motivation for addressing it.
Secondly, you've got to give them an opportunity. "When am I going to have a chance to put this into practice?" And finally, you've got to give them the means. It's no good telling them their analysis isn't good enough if we don't help them understand, "What would good analysis look like?" What do they need to do to make it good?
So I'll stop there, and I think that is exactly 50 minutes, and I'll leave ten minutes for questions.
So how do you get around that problem?