We’ve all gone through various digital learning courses, whether for compliance, product knowledge, safety training, or another topic our organizations require us to address.
We may see a few pictures or videos, hear some voice-overs, and then a question is posed. You choose one of three or four options, and then what? If you answer correctly, you move forward in the same direction as the hundreds or perhaps thousands of other people in the organization going through the exact same training. Hopefully you get them all correct; otherwise you’re sent back to the previous screen after a little feedback, now with one less option. So you’re down to three choices, a little less engaged, and at this point possibly guessing. Get it wrong again, and most participants disengage entirely and try to click their way through as fast as possible. Eventually, after multiple tries, that participant answers an adequate number of questions correctly and completes the course. Not much learning, and not much change in behavior, right?
Beyond the monotonous nature of this sort of training, we haven’t even addressed what the data is telling us, given the instructional design platform you’ve chosen to create this learning. The consensus is that two or three instructional design tools dominate the market. If your LMS is your preferred source of data (and by most accounts SCORM is still the dominant reporting output, though Tin Can, also known as xAPI, is a modest improvement), you know who has done what, how long it took them, how many attempts they made, and what they scored, more or less. If someone went through a course two or more times, completing it faster with each attempt and finishing with an acceptable score, would you be confident they’ve internalized the learning objectives you laid out as an instructional designer? Do you think they’ve been as successful with this learning initiative as the LMS is reporting? Let’s be realistic: the learner put forth the minimal effort to get enough answers correct to check the box, earn a sufficient score, and put the training behind them, never to think of it again. So what kind of value is your learning department really bringing to the organization if this scenario is all too familiar?
The need for better learning is evident across the industry, and it’s not going to come from the current tools being used to create digital learning. For higher-quality learning we need to focus on the learning architecture and improve workflows for more efficient and effective design. Through adaptive methodologies such as branching, we can use learning measurement in a far more impactful way. Layering measurement into each scene of a course means we can track competencies, behaviors, and organization-specific gauges essential to understanding how well we as a learning department are affecting the organization.
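To make the idea of branching with per-scene measurement concrete, here is a minimal, hypothetical sketch in Python. The scene names, competency labels, and routing logic are invented for illustration; this is not any particular authoring tool’s implementation. The point is that every decision is logged against a competency, and a non-ideal choice routes the learner to a remediation scene rather than simply removing an answer option.

```python
# Hypothetical sketch: a branching course where each decision is recorded
# against a competency, so an off-path choice triggers remediation.
from dataclasses import dataclass, field


@dataclass
class Scene:
    name: str
    competency: str
    branches: dict  # maps a learner's choice to the next scene (None = end)


@dataclass
class Tracker:
    events: list = field(default_factory=list)

    def record(self, scene, choice):
        # One measurement event per decision, tagged with the competency.
        self.events.append((scene.name, scene.competency, choice))


def run(scenes, start, choices, tracker):
    """Walk the branching graph, logging every decision along the way."""
    current = start
    for choice in choices:
        scene = scenes[current]
        tracker.record(scene, choice)
        current = scene.branches[choice]
        if current is None:
            break
    return current


# Invented example course: an off-path decision in the intro scene
# branches the learner into remediation before the advanced scene.
scenes = {
    "intro": Scene("intro", "product-knowledge",
                   {"ideal": "advanced", "off-path": "remediation"}),
    "remediation": Scene("remediation", "product-knowledge",
                         {"ideal": "advanced"}),
    "advanced": Scene("advanced", "application", {"ideal": None}),
}

tracker = Tracker()
run(scenes, "intro", ["off-path", "ideal", "ideal"], tracker)
```

After the run, `tracker.events` shows not just a score but exactly where the learner left the optimum path and which competency needed reinforcement, which is the kind of signal a completion report cannot provide.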
When a learner is asked an open-ended question, asked to justify a response, or even working through a multiple-choice question and doesn’t make the ideal decision, wouldn’t it be better to see whether that learner can find his or her way back to the optimum path of the training? Instead of telling, we’d have an excellent opportunity for teaching. We’d also immediately understand how to realign the focus of that learner’s development plan. If they pass through an entire course without a mistake, we should recognize they probably never needed the training. Conversely, if they make multiple mistakes from the start, there would be no need to deliver the entire course, because the need for remedial training would be quite apparent.
If we begin to measure scene by scene, decision by decision, with specific competencies, behaviors, and gauges in mind, the data available to us becomes far more robust and exceptionally more valuable. It would make the learning department a valued leader and key influencer in the organization, as much as if not more than any other department. If you look beyond the data of an LMS and see how learning measurement is the key to organizational prosperity, such departments transform from a dreaded cost center into a revenue-driving profit center.
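One way decision-level data like this can leave the LMS’s coarse completion model behind is the Tin Can (xAPI) statement format mentioned earlier. Below is a hedged Python sketch of a single decision expressed as an xAPI statement: the "answered" verb URI is a standard ADL vocabulary entry, but the `example.com` activity ID and the competency extension key are made-up placeholders, not a registered vocabulary or any vendor’s actual schema.

```python
# Hedged sketch: one learner decision recorded as a Tin Can (xAPI) statement.
# The example.com URIs and the competency extension key are illustrative only.
import json


def decision_statement(learner_email, scene, choice, competency, on_ideal_path):
    return {
        "actor": {"mbox": f"mailto:{learner_email}"},
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/answered",  # standard ADL verb
            "display": {"en-US": "answered"},
        },
        "object": {
            "id": f"http://example.com/course/scenes/{scene}",  # placeholder ID
            "definition": {"name": {"en-US": scene}},
        },
        "result": {
            "success": on_ideal_path,
            "response": choice,
            # Hypothetical extension carrying the competency being measured.
            "extensions": {"http://example.com/xapi/competency": competency},
        },
    }


stmt = decision_statement("pat@example.com", "scene-3", "delay-the-launch",
                          "risk-assessment", False)
print(json.dumps(stmt, indent=2))
```

A stream of statements like this, one per decision, is what lets a reporting layer answer questions about competencies and behaviors rather than just attempts and scores.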
***Author Zachary Konopka is Vice President of Skilitics. Skilitics Interact is an award-winning, collaborative, cloud-based training development platform designed for building and deploying sophisticated learning across organizations of all sizes. Skilitics Thrive is a cloud-based analytics tool that captures and reports the real results of clients’ learning. It provides the entire organization, from the training department and stakeholders to HR and IT, with a tool to assess the impact learning is having across the business, from fundamental results and outcomes to workforce development, ROI assessment, content analysis, and logistical reporting.***
Please feel free to reach out to me to learn more about Skilitics, our eLearning platform Interact, and our metrics engine Thrive. We can change the value your learning department brings to the organization in a tangible way, with analytical data to support your department’s standing. White papers, webinars, newsletters, and live demonstrations are available upon request.