What Makes a Good Assessment?
When we design assessments at Second Avenue Learning, we remember our experiences as learners. As learners, we remember when tests were fair, clear, and closely connected to what we were supposed to be learning. That doesn’t mean we necessarily enjoyed the testing experience, but we could tell when a test was well designed. We also remember what it felt like to take tests that were confusing and arbitrary.
As assessment designers, we keep all of those experiences in mind, working to produce assessments that we would respect if we were taking them. That’s a good goal, though it’s easier said than done. Achieving it means breaking the issue down a little further, so let’s consider the qualities of a good assessment. Here are some of the most important ones:
Assessments are designed to measure things, but to be effective, they need to measure the right things. Great math questions don’t help you if you really want to measure grammar. To figure out what’s relevant, we always return to the learning objectives, because they tell us what people are supposed to be able to do. Good objectives are measurable, and so learning objectives and relevant assessments go hand in hand.
Sometimes, the learning objectives call for the learners to memorize information. In that case, it is legitimate to test whether they know key facts and details. But if the learning objectives call for higher-order thinking such as analysis or evaluation, then the questions need to test those higher-order skills. If we’re testing writing ability, then the learners will need to write something. If we’re testing collaboration skills, then at some point the learners will need to work with other people. If we’re testing critical thinking skills, then learners will need to identify assumptions, analyze arguments, and draw sound conclusions.
As much as possible, questions should not allow for reasonable interpretations that support different correct answers. This is harder than it sounds, since in everyday speaking and writing we allow for more interpretation than we can in the world of assessment. Also, people are different, and may reasonably interpret terms differently. That’s why assessments need more than one pair of eyes, and why user data is so helpful in figuring out whether the learners are interpreting tasks in the way intended.
No one who writes questions wants to be unfair, but we still have to be alert to keep questions free from gender, racial, religious, and cultural bias. We need to avoid invalid assumptions about our audience so that we measure what we’re supposed to measure. In a sense, fairness is related to relevance and clarity, because if an assessment is unfair, then chances are it’s either irrelevant (for some, at least) or unclear (again, at least for some). Still, it’s worth articulating fairness as a distinct principle and goal.
A meaningful assessment tells us something about the learner, and for that to happen, some people have to get it wrong, and some people have to get it right. A question that everyone (including people who have not studied the content) gets right is not a good question, because it does not indicate anything about the skill of the learner. Questions that everyone gets wrong have the same problem, and erode confidence as well. Of course, it is not enough that some learners answer a question correctly while others answer it incorrectly: those who answer it correctly should know the subject matter better than those who answer it incorrectly, and that happens when there’s a close connection between the questions and the learning objectives.
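This idea can be made concrete with classical item analysis. As an illustration only (not a description of Second Avenue’s actual tooling), the sketch below computes two common statistics for a single question: its difficulty (the proportion of learners who answered correctly) and an upper–lower discrimination index, which compares the highest- and lowest-scoring learners. A question that everyone gets right, or that strong and weak learners answer equally well, scores near zero on discrimination.

```python
def item_statistics(responses, totals):
    """Classical item-analysis sketch for one question.

    responses: list of 0/1 values, 1 if that learner answered this item correctly
    totals: each learner's overall test score, aligned with `responses`
    Returns (difficulty, discrimination).
    """
    n = len(responses)
    difficulty = sum(responses) / n  # proportion correct; 1.0 means everyone got it right

    # Rank learners by total score, then compare the top and bottom groups.
    # Using the top and bottom 27% is a common rule of thumb in item analysis.
    order = sorted(range(n), key=lambda i: totals[i], reverse=True)
    k = max(1, round(0.27 * n))
    upper = sum(responses[i] for i in order[:k]) / k   # proportion correct, strongest learners
    lower = sum(responses[i] for i in order[-k:]) / k  # proportion correct, weakest learners

    # Near 0: the item doesn't separate strong from weak learners.
    discrimination = upper - lower
    return difficulty, discrimination


# A question the top half gets right and the bottom half gets wrong
# discriminates well; one that everyone answers correctly does not.
print(item_statistics([1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
                      [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]))  # high discrimination
print(item_statistics([1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
                      [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]))  # zero discrimination
```

Real item analysis usually goes further (for instance, correlating item responses with total scores), but even this simple index captures the principle above: a good question is one that the stronger learners tend to get right and the weaker learners tend to get wrong.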
These are our principles of assessment. Here and elsewhere at Second Avenue, we’ve found that framing the issues is a good first step in the right direction. It’s easier to get to your destination if you know where you are trying to go.