Bringing ‘Distractors’ into Focus: Multiple-Choice Design

    Education is inundated with multiple-choice assessment because the questions are simple to craft and grade. While I have no qualms with multiple-choice as a tool, I have seen a troubling pattern throughout my education and e-learning career concerning ‘distractors’. Distractors are the incorrect choices; they should seem plausible to anyone who has not mastered the material being assessed, so that only well-versed test-takers can confidently rule them out. Too often, however, learners are presented with questions that have, at best, two plausible answers. Multiple-choice questions must offer equally plausible, concise answer choices in order to honor the test-taker and ensure optimal data quality.

A girl bites her pencil in half out of frustration


The Problem with Flippant and Tricky Distractors

    If you have never suffered feelings of disillusionment with learning, simply take to the internet to see how the feeling has spawned an entire meme community.

Students portray progressive disinterest

    When distractors are implausible or written flippantly, they carry a voice that says, “this isn’t really all that important. We’re just doing it to check a box.” Learners can develop false confidence and instructors get low-quality data. Learners may also become aware that they are simply progressing through a system of “things we have to do” and slip into the aforementioned disillusionment with learning. I often see this in adult training, where the designer makes a half-hearted attempt at humor in the answer choices. This really just underscores that the designer believes the material is boring and is fishing for engagement, and it comes at the cost of quality assessment and of the learner feeling their time was respected.

    Learners do not need more reason to feel that they are subject to a system that doesn’t consider them capable. Instructors need meaningful data to guide their instruction and make sure they are covering relevant material. Poorly-constructed distractors make both situations worse.

    Instructional designers create equal frustration when they employ tricky distractors that are off-the-wall or confusing. Engaging in semantics, when semantics is not what is being assessed, pulls the learner away from the task at hand and makes them puzzle over the meaning of words. This can be devastating for learners, who may feel hopeless after devoting time to study only to be hung up on technicalities. Instructors, meanwhile, can easily misread the resulting data as showing that their students lack the requisite knowledge, when the question itself diminished the opportunity to show that knowledge.

    Tricky and flippant distractors don’t honor the individuals being assessed. Assessment should cultivate the learner’s integrity and serve as a diagnostic tool by which instructors can grow their learners’ abilities. If things feel too easy or simply hopeless, we are just wasting everyone’s time.

The Solution

    Poor distractors aren’t new; in 2004 Pearson published findings on the issue, stating that “authorities in educational assessment have suggested extending the functional role of distractors to include a new purpose: identifying the nature of a student’s misunderstanding” (King et al., 2004).

Photo by Tachina Lee on Unsplash

    The solution to poorly-crafted distractors is to ask oneself questions like:

  • What does selection of this answer show about my learner’s understanding?
  • How close is my learner to grasping this concept?
  • What lessons would help a learner achieve mastery if this is what they selected?

    If you couldn’t readily come up with answers to these questions based on your distractors, then you probably have work to do to get them into good shape. There are many more questions one could ask about distractors to ensure quality and plausibility, but these are a great start. Let’s look at a practical example.


Practical Example:


Distractors need to maintain plausibility to avoid:

A. Learners who are distractors themselves.
B. Questions becoming too distracting.
C. Data quality decreasing and learners becoming disillusioned.
D. College students from wanting to change their majors…again.

    Looking over this question, the stem is satisfactory and C is a succinct answer. It’s an easy answer for the most part, because the other options are poorly-written distractors. ‘A’ and ‘D’ both go for the half-hearted attempt at humor but really have nothing to do with the question. ‘B’ has an air of plausibility, but only because it relates to distractors in a confusing way. Let’s rewrite this question.

Distractors need to maintain plausibility to avoid:

A. The assessment developing a theme that makes it repetitive.
B. Confusing learners about past material.
C. Data quality decreasing and learners becoming disillusioned.
D. Data quality rising while student engagement declines.

    I won’t pretend it’s a perfect question, and I wrote it to address something I have had to be very conscious of throughout my career, but it now has four plausible answers for anyone who hasn’t spent time thinking about the plausibility of multiple-choice distractors.
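
    To make the diagnostic intent concrete, here is a minimal sketch of how the rewritten item could be recorded so that each distractor carries a rationale. The Python structure, field names, and rationale wording below are my own illustration, not anything prescribed by the Pearson report; the point is simply that a learner’s selection should map to a specific misunderstanding rather than a bare “wrong.”

from dataclasses import dataclass

@dataclass
class Choice:
    text: str
    is_correct: bool = False
    # For a distractor: the misunderstanding its selection suggests (illustrative wording).
    rationale: str = ""

@dataclass
class Item:
    stem: str
    choices: dict[str, Choice]

    def diagnose(self, selected: str) -> str:
        """Report what a selection reveals, not just whether it was right."""
        choice = self.choices[selected]
        if choice.is_correct:
            return "Correct."
        return f"Likely misunderstanding: {choice.rationale}"

# The rewritten example item, with each distractor tied to a plausible misreading.
item = Item(
    stem="Distractors need to maintain plausibility to avoid:",
    choices={
        "A": Choice("The assessment developing a theme that makes it repetitive.",
                    rationale="Confuses plausibility with variety in how items are worded."),
        "B": Choice("Confusing learners about past material.",
                    rationale="Mixes up distractor quality with review and sequencing."),
        "C": Choice("Data quality decreasing and learners becoming disillusioned.",
                    is_correct=True),
        "D": Choice("Data quality rising while student engagement declines.",
                    rationale="Reverses the relationship between plausibility and data quality."),
    },
)

print(item.diagnose("B"))  # Likely misunderstanding: Mixes up distractor quality with ...

    Aggregated across a class, those rationale tags are what turn a score report into something an instructor can actually teach from.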

Wrap-Up

  • Avoid distractors that are too easy or too unrelated to the material
  • Plausibility helps maintain data quality while respecting the learner

Sources:

King, K. V., et al. (2004). The distractor rationale taxonomy: Enhancing multiple-choice items in reading and mathematics. Assessment Report. Pearson.
