Yesterday we (Marieke Guy and Tharindu Liyanagunawardena) facilitated a session at the ALT annual conference on ‘Making your mind up: Formalising the evaluation of learning technologies’.
ALT session
The session began by looking at the impact on institutions of the mass pivot from face-to-face teaching to online teaching due to COVID-19. Institutions have seen an increased need for greater investment in the digital campus (as noted by the Office for Students’ Gravity Assist report), which has resulted in a significant number of technology procurement decisions. However, anyone who works in the learning technology area knows that technology purchase decisions aren’t straightforward. They are often impacted by pressure from above (e.g. senior leadership, budget, culture), pressure from below (e.g. resource, time, users) and pressure from the side (e.g. security implications, the wider world, accessibility regulations, government regulations, Office for Students expectations).
During the session we looked at how existing frameworks can support decision making, including the following:
- Educause – The Rubric for E-Learning Tool Evaluation
- The technology acceptance model
- Jakob Nielsen's usability testing
- Tony Bates' SECTIONS model
Change management approaches can also help us with decision making and implementation, and with understanding potential reasons for failure:
- ADKAR (Awareness, Desire, Knowledge, Ability, Reinforcement)
- McKinsey 7S Model (strategy, structure, systems, shared values, style, staff, skills)
- Lewin, Bridges, Kotter etc.
At UCEM we have recently made the decision to purchase an assessment platform. We went through a very thorough decision-making process: investigating the existing process, writing a functional requirements document, identifying systems to evaluate, and writing a systematic testing and evaluation plan (inspired by the Educause rubric). We then carried out systematic testing on the VLE (using a testing team with assigned roles), completed a decision matrix (a simple weighted-scoring sketch is shown below), went through an IT software and cloud solution approval process, and finally delivered recommendations to the appropriate people. You can read about our final decision in our recent blog post.
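For anyone unfamiliar with decision matrices, the short sketch below shows the basic weighted-scoring idea we mean. The criteria, weights, platform names and scores are illustrative placeholders only, not our actual evaluation data.

```python
# Minimal weighted decision matrix sketch.
# All criteria, weights and scores below are illustrative, not UCEM's evaluation data.

criteria = {                      # criterion: weight (weights sum to 1.0)
    "Functionality": 0.35,
    "Accessibility": 0.25,
    "Integration (LTI/SSO)": 0.20,
    "Cost": 0.20,
}

# Each candidate platform is scored 1-5 against every criterion by the testing team.
scores = {
    "Platform A": {"Functionality": 4, "Accessibility": 5, "Integration (LTI/SSO)": 3, "Cost": 4},
    "Platform B": {"Functionality": 5, "Accessibility": 3, "Integration (LTI/SSO)": 4, "Cost": 2},
}

# A weighted total per platform informs (but does not replace) the final recommendation.
for platform, platform_scores in scores.items():
    total = sum(weight * platform_scores[criterion]
                for criterion, weight in criteria.items())
    print(f"{platform}: weighted score {total:.2f}")
```

The weighting step is where stakeholder input matters most: agreeing which criteria count, and how much, before anyone scores a product.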
Crowdsourced ideas
During the session attendees were asked to share their own ideas. The results can be seen on the Mural board; please do add your own.
The main suggestions so far (10/9/21) are:
Do you use existing frameworks?
- ACTIONS model (2005), which evaluates technology for: Accessibility for learners; Cost structure; Teaching application; Interactivity or ease of use; Organizational impact on the educational institution; Novelty; and Speed with which courses can be developed for the technology.
- Our own framework
- We used an active learning framework, developed through focus groups with staff and students, to select tools for active learning
- Could Jisc help here?
What works?
- Getting stakeholders involved from the beginning works
- We have recently expanded our project management team. Professional project managers have been a revelation to me; much more effective than being managed by academics
- Open communication with stakeholders/SMEs
- Project teams drawing from all relevant stakeholder groups rather than, say, the IT department running a project
- Clear, regular, jargon-free comms. Include all stakeholders and test their use cases
- User testing with students
What doesn’t?
- Too top down: purely driven by analysts who don't understand the context
- Not involving people
- Not telling people what is happening
- Decisions based on free packages or easy bolt-ons
- We struggled with systems that would not allow full testing. Vendors provided a cut-down version of the functionality and expected us to make a decision; they would not allow us, for example, to see how it works with LTI or SSO.
- Budget
- Identifying a system and introducing it – even if it fits all criteria, it may not be taken up by users
How can you ensure a thorough evaluation process that is still time and resource effective?
- Involve more people from the wider teaching and learning community. This helps to test expectations against reality in each team
Who should be involved in the evaluation process?
- All stakeholders, really: question writers, moderators, the exam unit, students, learning technology people, exam markers, the quality unit, appeals, and student records and reporting
- Don’t forget to include local technical staff
- Senior leadership support
What are the challenges to a more formal process?
- Lack of buy-in
- Not fit for purpose
- Big long-term projects take priority for resources, meaning small projects (with possibly big impacts) take too long to get done
- Lack of time
Resources
Slides for the session are available from SlideShare and a video is on YouTube.
I am the Learning Technologies Production Manager at UCEM and manage the review, piloting, implementation and evaluation of approaches to support effective and innovative teaching and assessment.