I attended this conference with Tharindu. There was a lot of familiar discussion around the rationale for online marking over paper marking. The key themes that came out of the presentations and discussions included:
- better student experience
- availability of learning analytics & data yield
- marking rubrics enabling consistency of feedback
- technology making feeding back easier
- discrepancies between how students learn (using online content, digital libraries, etc.) and how they are then tested (on paper)
- handwriting being ‘a dying art’
- increased efficiencies/sustainability
- political motivations.
Presenters included Dr Rachel Maxwell from the University of Northampton (UoN), who spoke about the pedagogical transformation around marking that had taken place at the institution. UoN used an ‘assessment & feedback matters’ focus group to drive through the change, based on Jisc’s ‘assessment and feedback lifecycle’. The group wanted to design ‘future-focussed assessment that would be marked explicitly against LOs’ (i.e. rubrics are optional at UoN and feedback is provided against the LOs). Their priority was to design ‘pedagogically valid’ marking, and then, secondly, to find the technology to deliver it.
Rachel told the conference that a common complaint from students was that they didn’t understand why they had to wait so long for their results and she showed a fantastic graphic created by UoN students to help explain to their peers what is involved in the process and why assessment marking took so long. I really like the idea of students being involved in finding solutions on their learning journey – this echoes the type of active learning and problem-solving skills we want to enable for our students at UCEM, that they can apply both to their modules and ‘real world’ issues.
Source: UCEM (2018) ‘How we assess your work’ graphic by University of Northampton students, shown by Dr Rachel Maxwell of UoN at the University of Reading ‘Remaking marking’ conference, 4 September 2018 [photograph]. Reading: UCEM.
This theme of students contributing to solutions came up again following a conversation around using rubrics and/or marking to learning outcomes to enable consistent marking at faculty level. This led on to a discussion around how students engage with rubrics – what’s useful to them, what does a good rubric look like? – and the need for research in this area. Dr Emma Mayhew of the University of Reading (UoR) talked about research by the educational psychologist Dr Naomi Winstone (University of Surrey) which showed that whilst students could broadly define words such as ‘consistency’, they then had trouble translating what these terms meant in the context of their assessment. The University of Surrey therefore worked with students to help them develop their own glossary of assessment terms.
Something I found really surprising (in my innocence!) was that there also seemed to be a consensus among lecturers in the room that students don’t read their feedback, or don’t act on it, which led to a discussion around the merits of getting students to feed back on their feedback before marks are released to them. I wonder, do we need to ‘educate’ students more around their responsibilities as students; and/or work with them to define their responsibilities?
Emma Mayhew also mentioned an interactive clickable rubric resource that UoR are currently developing as a time-saving device. She also talked about audio feedback, which she uses, and which she finds saves her time (unlike video feedback which she found was more time-consuming). She reported that students responded well to a ‘warm’ human voice and found it reassuring, but she pointed out there was training to be done around ensuring tutors use the right tone of voice if they are using audio feedback.
Dr Simon Kent from Brunel University said that in their experience students reported liking audio feedback for practical tasks but not for written content. It was also reported that students felt that quick marks were useful for non-disciplinary content, e.g. referencing, but that it was not so welcome for essay-type answers.
Simon also spoke about the ‘Bring your own device’ (BYOD) exams scheme using locked-down browsers at Brunel. He said that he had initially thought it would revolutionise assessment and marking, but had sadly come to realise that in reality BYOD means we have come only a short distance over a long period of time. He said that BYOD works for traditional written questions and MCQs, and that webcams are used for diagrams/artwork. He said there was no evidence of any increase in cheating and that no group is disadvantaged by digital exams, though there is some evidence that mature students returning to study after a break are less comfortable with the concept. Students with a disability were not considered, which gives rise to questions about diagnostic testing to check that students are able to use the technology before they can access it, the need to provide alternatives if not, and the need to embed skills development and support into the learning.
Dr Pete Lonsdale from Keele University talked about how he had mapped out the whole assessment and feedback process and had produced software that tracked and enabled assessment through its many stages including submission and feedback. He talked about how involved the process and design had been and how it fell over at several stages because the finer details had been missed. He also discussed the need for an online policy that includes many eventualities, e.g. protocol if students send in a virus-ridden file or if a file is misplaced.
It was interesting to hear how other institutions are encountering similar problems and solutions to ours, and particularly to learn about the examples where students had been directly involved in resolving issues.
Development editors and quality manager at UCEM