Question Marks


Last week I came across an interesting research paper from the Journal of Educational Technology & Society on the subject of online assessment. This is a key topic in e-learning: along with courseware, it’s the most widespread formal use of e-learning in organisations.

Marking Strategies in Metacognition-Evaluated Computer-Based Testing looks at the effect of marking strategies. On seeing the title I thought this would be about the evaluation of tests and the different ways you could go about scoring an online assessment, but not so. Marking in this case refers to functionality added to an online assessment that lets a person mark questions they are ‘uncertain’ about during the test, with the option of coming back to them later before submitting their answers.

In addition to this marking-up facility, which was at the core of the research, the authors also designed a more comprehensive form of feedback, which they call metacognition-evaluated feedback (MEF). Simply put, MEF integrates students’ answers with their mark-ups, together with explicit feedback about the specific choice they made.
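To make the idea concrete, here is a minimal sketch of how MEF-style feedback for a single item might be assembled by combining the student’s answer, their mark-up and an explanation of the option they chose. The record structure and field names are my own assumptions for illustration, not the implementation described in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ItemResponse:
    """One student's response to one multiple-choice test item (hypothetical model)."""
    chosen: str                                       # option the student selected, e.g. "B"
    correct: str                                      # the correct option
    marked_unsure: bool                               # did the student flag this item as 'unsure'?
    explanations: dict = field(default_factory=dict)  # explanation text keyed by option

def mef_feedback(item: ItemResponse) -> str:
    """Combine the answer, the mark-up and the explanation of the chosen option."""
    outcome = "correct" if item.chosen == item.correct else "incorrect"
    confidence = "unsure" if item.marked_unsure else "sure"
    explanation = item.explanations.get(item.chosen, "")
    return f"You chose {item.chosen} ({outcome}) and marked yourself {confidence}. {explanation}"
```

On this reading, a question answered wrongly but marked unsure would produce feedback that both acknowledges the student’s doubt and explains the distractor they picked.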

The study in question featured ninth-grade (15-year-old) participants taking a 30-question multiple-choice test in vocabulary and reading comprehension. The authors set out to answer two questions:

  • Does mark-up improve student scores?
  • Does MEF encourage marking-up and review behaviour?

The authors found that:

  • student ability was crucial to whether marking up improved test results
  • mark-up only improved test scores for ‘medium’ ability students; those with higher or lower ability didn’t show any noticeable improvement
  • mark-up increased efficiency and effectiveness of self-managed learning
  • mark-up encouraged the behaviour of reviewing and reflecting on answers across all abilities
  • mark-up facilitated greater metacognition in test-takers, prompting them to actually question their own learning and understanding
  • MEF encouraged students to use mark-up skills more frequently and to review answer-explanations of test items

Incorporating this richer MEF at the end of the test certainly appears to offer a degree of formative assessment of student performance that feeds into constructing the next piece of learning needed. As the authors state: “Students made predictions about their test results and then observed what happened to check their predictions. If their predictions failed, they tried to determine how these mistakes occurred and then solved their problems.”

The mark-up system implemented was simply the option for a student to mark any given answer as ‘unsure’. Combined with whether the answer turned out to be right or wrong, this covered a number of possibilities (a rough sketch of how these categories could be derived follows the list):

  • sure correct: the student believed they were right, and they did indeed get the answer right
  • sure incorrect: the student believed they were right, but actually got the answer wrong
  • not sure: in this case the student wasn’t sure, and may have got the answer right or wrong
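Continuing the hypothetical ItemResponse record from the earlier sketch (again, my own illustration rather than the study’s code), the categories fall out of crossing the mark-up flag against the correctness of the answer:

```python
def confidence_category(item: ItemResponse) -> str:
    """Cross the mark-up flag with answer correctness to derive the category."""
    if item.marked_unsure:
        return "not sure"         # flagged doubt, whether right or wrong
    if item.chosen == item.correct:
        return "sure correct"     # confident and right
    return "sure incorrect"       # confident but wrong
```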

The results the authors obtained here also provided an insight into, and/or confirmation of, who the low performers were likely to be: they generally fell into the ‘sure incorrect’ category.

As ever, effective feedback following submission of results is crucial. As the paper states, you should “provide useful adaptive feedback so that students [can] understand their performance, clarify their mistakes, and increase their learning motivation.” In addition, to take the lowest performers into account, feedback should be written in a way that will “encourage review behavior”, with the suggestion of making this feedback “adaptive and detailed” with specific examples.
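To illustrate what ‘adaptive and detailed’ could mean in practice, here is one more sketch that selects a category-specific message, giving the ‘sure incorrect’ group the strongest prompt to review. The templates and function build on the earlier hypothetical sketches and are my own assumptions, not anything prescribed by the paper.

```python
# Hypothetical per-category templates, worded to encourage review behaviour.
FEEDBACK_TEMPLATES = {
    "sure correct": "Well done - your confidence was justified. {explanation}",
    "not sure": ("You flagged this one as unsure. Re-read the explanation and "
                 "compare it with your own reasoning: {explanation}"),
    "sure incorrect": ("You were confident here, but the answer was wrong - the kind "
                       "of mistake most worth revisiting. {explanation}"),
}

def adaptive_feedback(item: ItemResponse) -> str:
    """Pick a category-specific, review-oriented message for one test item."""
    explanation = item.explanations.get(item.correct, "")
    return FEEDBACK_TEMPLATES[confidence_category(item)].format(explanation=explanation)
```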

This paper makes a good case for improving the basic multiple-choice assessment found in courseware, as well as providing the design features to include. My experience of courseware is that the majority of embedded tests and assessments aren’t designed with a mark-up function. It certainly looks as though adding one would do no harm.
