Are Some People Just “Great Test Takers”?
One of the questions we get asked most frequently is:
How does Fulcrum’s platform work for learners who are just “great at taking tests”?
We love this question because it underscores something we all know – that being good at taking tests or scoring well on an exam doesn’t necessarily indicate what someone really knows (let alone whether they’ll be able to accurately apply their knowledge and skills 3 weeks, 3 months or 3 years from now). In fact, many good test takers rely on a combination of short-term memorization, guesswork and surface-level content familiarity to come up with the right answer. And assessment formats like multiple choice and true/false make it easier for good test takers to “game” the system, using test-taking tricks and strategies to make better guesses.
All this means that while a “good test taker” might score well on a standard multiple choice test, they haven’t proved their mastery of the information. It’s very likely they’ll have significant gaps in their knowledge and skills that persist after training, especially since test-taking strategies are oriented toward short-term performance. So when it comes time to apply their training on the job, they’ll likely struggle, because long-term retention was never truly assessed. This will be exposed even further by on-the-job Performance data (more on this soon…).
With workforce performance hanging in the balance, organizations need learning tools that can help learners achieve mastery and verify the application-level mastery and confidence of all employees – good and bad test takers alike.
What makes a Good Test Taker?
We’ve all known someone who’s a good test taker – someone who seems to have a supernatural ability to study only a little but still ace the multiple choice exam or advance through the training. Ask that person to recall the information a few months later, however, and they’ll likely draw a blank. That’s because the information only resided in short-term memory. Research shows that what makes these people so good at taking tests is likely a mix of:
- Low test-taking anxiety which allows them to perform better in the moment
- Well-informed schemas that provide greater context and allow them to make more educated assumptions (guesses), especially when the test is multiple choice
- Brain-science based study habits – like retrieval practice, repetition, spacing the learning out, etc. – that help them learn, remember and retrieve knowledge more effectively
Another reason that we’ve all encountered a good test taker is that a lot of people have been taught how to take tests at some point in their lives. For example, think of all the companies that make millions of dollars preparing high school students for the SAT or ACT each year. These prep programs certainly teach knowledge and skills, but perhaps more importantly, they teach test-taking strategies that help students “game” these types of tests. And they work – but they’re a short-term retention strategy, and the majority of this information will never be retained for the long term. A report in the Washington Post last year noted that coaching can improve SAT scores by anywhere from 30 to 115 points. Then again, college entrance exams aren’t what’s at stake in your organization.
And as we mentioned above, some tests are more susceptible to guesswork strategies than others. Multiple choice tests, for example, make it easy to “narrow the choices to two” and make an educated guess.
All this makes it hard to verify the subject-matter mastery of good test takers. Did they really know the answers or just make good guesses? Do they have hidden knowledge gaps that are glossed over by a good test performance? Unfortunately, many eLearning solutions on the market aren’t sophisticated enough to make the distinction between accurate guesswork and real knowledge.
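To see how much guesswork alone can inflate a score, here’s a quick back-of-the-envelope calculation (a purely illustrative sketch; the test length and option counts are made up):

```python
# Illustrative only: expected score from pure guesswork on a multiple
# choice test, before and after "narrowing the choices to two."
def expected_guess_score(num_questions: int, options_remaining: int) -> float:
    """Expected number of correct answers from random guessing alone."""
    return num_questions * (1.0 / options_remaining)

# On a hypothetical 20-question test with 4 options per question:
blind = expected_guess_score(20, 4)     # 5 correct by blind guessing
informed = expected_guess_score(20, 2)  # 10 correct once two options are eliminated
print(f"Blind guessing: {blind:.0f}/20; after elimination: {informed:.0f}/20")
```

Eliminating just two distractors doubles the expected score without any additional knowledge – which is exactly why a multiple choice score alone is a weak proxy for mastery.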
How AI and Machine Learning Change the Guesswork Game
Enter learning platforms powered by AI and machine learning. They can make real-time adaptations and help verify application-level mastery, making it nearly impossible for good test takers to “game” the system without truly mastering skills. The Fulcrum Labs platform, for example, evaluates a variety of real-time variables.
The platform challenges application-level mastery by delivering multiple assessments on each learning objective, going beyond the simplistic multiple choice question type. Learners have to demonstrate that they’re progressing toward mastery and confidence – gradually and consistently improving, not just randomly answering questions correctly. And, because we use a competency-based approach, subsequent sections of material are made available only after the learner can correctly answer the more complex and abstract assessment items. Additionally, Fulcrum can import an organization’s on-the-job Performance data and overlay it on top of the Fulcrum learning data to verify whether a person’s training indeed correlates with their on-the-job Performance.
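The competency-based gating described above can be sketched roughly like this. Every name, format and threshold below is an illustrative assumption, not Fulcrum’s actual implementation:

```python
# Hypothetical sketch of competency-based gating: the next section unlocks
# only after the learner answers correctly across multiple assessment
# formats and difficulty levels for every objective in the current section.
from dataclasses import dataclass

@dataclass
class Attempt:
    objective: str
    item_format: str   # e.g. "multiple_choice", "fill_in_blank", "drag_phrase"
    difficulty: int    # 1 = recall ... 3 = application/analysis (assumed scale)
    correct: bool

def objective_mastered(attempts, objective, min_formats=2, min_difficulty=3):
    """Mastery requires correct answers across several formats, including
    the most complex items - not one lucky multiple-choice hit."""
    correct = [a for a in attempts if a.objective == objective and a.correct]
    formats = {a.item_format for a in correct}
    hardest = max((a.difficulty for a in correct), default=0)
    return len(formats) >= min_formats and hardest >= min_difficulty

def section_unlocked(attempts, objectives):
    """The whole section opens only when every objective is mastered."""
    return all(objective_mastered(attempts, o) for o in objectives)
```

The key design point is that a single correct answer – however it was obtained – never unlocks anything; only a pattern of correct answers across formats and difficulties does.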
Behavior (aka – Confidence)
Our platform monitors how a user behaves throughout the learning to help determine mastery and predict application. Unlike other platforms, we don’t ask learners whether they “feel” confident every time they answer a question – a technique that generates a huge number of false positives, not to mention decision fatigue. Instead, Fulcrum deploys its proprietary Behavioral & Knowledge Mapping (BKM) technology, which evaluates variables including: how quickly a learner answers questions, hesitation when selecting one answer and then switching to another, referencing source information to better inform answers, and even how a learner engages with hints and coaching prompts. For example, when the platform’s AI notes several missed assessments, it suggests content to shore up the learner’s understanding. Does the learner take the opportunity to review that content or not? How learners respond to these coaching and feedback prompts helps determine whether they’re really trying to learn, or just trying to guess their way to mastery.
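As a rough illustration of how behavioral signals like these can be combined, here is a toy heuristic. It is emphatically not Fulcrum’s proprietary BKM technology – just a sketch, with made-up weights, of the kind of signals named above:

```python
# Toy confidence heuristic (NOT Fulcrum's BKM): combines a few of the
# behavioral signals mentioned in the text with invented weights.
def confidence_score(response_seconds, answer_switches, used_hint, reviewed_remediation):
    score = 1.0
    if response_seconds > 60:         # long hesitation suggests uncertainty
        score -= 0.3
    if answer_switches > 0:           # picked one answer, then switched
        score -= 0.2 * min(answer_switches, 2)
    if used_hint:                     # leaned on a hint to reach the answer
        score -= 0.2
    if reviewed_remediation:          # engaging with suggested review content
        score += 0.1                  # is a positive sign of genuine effort
    return max(0.0, min(1.0, score))  # clamp to [0, 1]
```

Even this crude version shows the idea: a correct answer delivered slowly, after switching choices and consuming hints, is scored very differently from the same correct answer given quickly and decisively.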
Mastery is all about consistency with confidence. In our platform, learners can only advance once they demonstrate mastery across questions of different difficulties and complexities that are tied to relational learning objectives. It’s not a one-and-done process. We require learners to prove that they can consistently answer questions accurately, and we periodically surface Memory Boosters – timed to Ebbinghaus’ forgetting curve – to make sure information is transferred from short-term memory to long-term retention. This prevents learners from relying on luck, good guesswork or test-taking skills to advance through the training program.
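Scheduling reviews along a forgetting curve can be sketched with the standard Ebbinghaus model, where retention decays as R = e^(−t/S) for memory stability S. The stability values, the 80% threshold and the doubling rule below are illustrative assumptions, not Fulcrum’s numbers:

```python
import math

# Sketch of scheduling spaced "Memory Boosters" along an exponential
# forgetting curve R = e^(-t/S). All constants here are illustrative.
def next_booster_days(stability_days: float, retention_threshold: float = 0.8) -> float:
    """Days until predicted retention decays to the threshold."""
    return -stability_days * math.log(retention_threshold)

# Each successful review strengthens the memory (larger stability), so the
# interval between boosters grows: spaced practice, not massed practice.
stability = 10.0
schedule, day = [], 0.0
for _ in range(4):
    day += next_booster_days(stability)
    schedule.append(round(day, 1))
    stability *= 2  # assumed doubling of stability after each successful recall
print(schedule)  # [2.2, 6.7, 15.6, 33.5] - ever-widening review intervals
```

The widening gaps are the point: resurfacing material just as it starts to fade is what moves it from short-term memory into long-term retention, and it is precisely what cram-and-guess strategies skip.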
We’ve talked a lot about good test takers, but what about bad test takers? Our system helps them as well. For example, our platform provides lots of opportunities for “low stakes” retrieval practice (a key brain-science based study tactic). This not only helps users learn more deeply, but it also builds their confidence so they have less test anxiety (a key factor in poor test performance). Additionally, our system gives users targeted, personalized hints and “Good to Knows” that improve their schemas and build the scaffolding necessary to make informed assumptions in the future.
All this is why we say that there is no such thing as a bad or good test taker in our system. Our technology cuts through to the underlying behaviors, motivations and competencies of each user to verify their mastery and predict future application. And because our system leverages many different assessment types that encourage active learning and measure deep thinking skills (e.g. drag phrase or fill-in-the-blank), organizations can assess employee knowledge on the application and analysis levels.
We turn good test takers and bad test takers into learners and learners into confident subject matter masters.