In my last blog, I took the position that key stage 3 assessment data is still important (even though Ofsted have somewhat side-lined it), but that it cannot continue in its current form, using descriptors as adopted in most schools. My idea is not revolutionary, or even new: I believe we can improve the way we track progress in key stage 3 by simply using curriculum end points and pupil answers to calculate the percentage of knowledge learned and remembered (in maths terms: correct answers / total number of defined end points = percentage of progress). The knock-on effect of applying this system opens up an opportunity to rethink the actual “type” of assessment we use to judge pupil progress.
In my experience, the majority of key stage 3 summative assessments are designed around extended answers or essay questions: the pupil writes a response and it is then marked using descriptors or mark schemes. Sorted. But this process is extremely subjective and, in my opinion, raises many questions. Does the assessment fully capture the key knowledge and concepts the pupil was intended to learn? How experienced is the marker? Is the data reliable? For example, a history essay question on the economic and social impact of the Treaty of Versailles and the rise of Hitler can quickly turn into an essay on the Holocaust. In that case, did we learn anything about what pupils learned or remembered? Were the assessment objectives clearly met? Multiple choice questions, by contrast, can cut directly to the intended concept, knowledge or skill, with a high level of reliability in what the final outcome is telling the pupil and us.
The use of multiple choice questions marries up nicely with an assessment system based on defined end points because it allows end points to be directly tested and converted into a percentage. Pupil A answered 30/50 on their assessment on the factors of World War I; therefore, it appears that Pupil A knows and remembers 60% of the intended learning.
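To make the conversion concrete, here is a minimal sketch of the calculation described above. The function name, the question topic and the pass-through of the worked 30/50 example are all illustrative, not part of any real tracking system.

```python
def progress_percentage(correct: int, total_end_points: int) -> float:
    """Convert correct answers on end-point-aligned questions into a
    percentage of the defined curriculum end points (the formula from
    the text: correct answers / total defined end points)."""
    if total_end_points <= 0:
        raise ValueError("total_end_points must be positive")
    return round(100 * correct / total_end_points, 1)

# The worked example from the text: Pupil A scores 30/50 on the
# assessment covering the factors of World War I.
print(progress_percentage(30, 50))  # 60.0
```

The same function works for any assessment whose questions map one-to-one onto defined end points, which is the design assumption the whole system rests on.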
There are many arguments against multiple choice tests – pupils can guess, they are too easy, they can’t be used in all subjects. All of these concerns are valid, but they can be overcome with high quality multiple choice design (writing multiple choice questions isn’t a natural talent; it is a skill that needs to be developed).
Here are five reasons why I propose that the advantages of using multiple choice exams outweigh their poor (and undeserved) reputation.
- Research suggests that multiple choice assessments are more reliable than other tests (Burton et al., 1991), giving us a more precise picture of what pupils know and don’t know – one that can easily be understood and communicated to pupils and parents.
- Multiple choice tests are marked objectively, not subjectively (“best fit”) as with extended questions and essays; therefore, marking is more equitable and fair, both for the individual pupil and when making comparisons between key groups.
- In addition to testing factual knowledge, multiple choice questions can also evaluate higher-order thinking skills such as synthesis and analysis. They are not just a yes–no test; well-designed multiple choice questions can dig deep (Christodoulou, 2016, see examples pp. 164–168).
- Multiple choice answers can include common misconceptions as distractors, so that you are not only getting data on the right answers but also on what pupils mistakenly believe to be correct.
- Multiple choice assessments will have a positive impact on teacher workload. It takes time to write high quality questions, but they are a long-term resource that is easy to review (and revise), quick to mark and simple to use for tracking pupil progress.
I am not suggesting that extended answers and essays should disappear from testing; they still play an important role in supporting pupil learning. What I am saying, however, is that multiple choice tests should not be dismissed: they can provide concise, reliable data that improves the effectiveness of how we track pupil progress (knowing and remembering more) across key stage 3.
Burton, S. J., Sudweeks, R. R., Merrill, P. G., & Wood, B. (1991). How to Prepare Better Multiple-Choice Test Items: Guidelines for University Faculty. Provo, UT: Brigham Young University Testing Services and The Department of Instructional Science.
Christodoulou, D. (2016). Making Good Progress? The Future of Assessment for Learning. Oxford, United Kingdom: Oxford University Press.