A student of mine sent me this TED Talk saying, “Dr. Harsy, this makes a good case for your mastery-type exams!”
Hi everyone, Amanda Harsy and Jessica O’Shaughnessy here to add a few thoughts on a variation of Mastery-Based Testing. Both of us have been using Mastery-Based Testing in our courses over the last two years. Jessica uses MBT in Calculus I, Calculus II, and Introductory Statistics, while Amanda uses it in her Calculus II, III, and Real Analysis courses.
We usually have 16-18 concepts in our Mastery Exams and students can add to their exam grade by mastering these concepts. Now we have a choice: should we treat all the concepts as equal? That is, can students master any of the concepts to build their grade? In some courses, it may make sense to let the students choose.
But what if you feel that not all concepts are created equally? That is, are there some concepts you really think students should have grasped after taking a class?
Both of us use a slight modification in our mastery-based grading to address this belief. For example, since Calculus II is a sequential course, there are certain concepts we think all Calculus II students should master in order to be successful in Calculus III. We want students to be able to differentiate transcendental functions, calculate area, and use integration by parts. To enforce these concepts, we have introduced “Core Concepts.” In Amanda’s Calculus II course, for example, these concepts include differentiation and integration of transcendental functions, L’Hopital’s Rule, Advanced Integration Techniques, Calculating Area, Calculating Volumes, and Interval of Convergence for Series. Students must master these 7 concepts in order to earn at least a C for their exam grade. Similarly, Jessica breaks her Calculus I classes into 16 topics, 7 of which are core/required concepts. The students must master these 7 before any other questions count. If they master only these required concepts, they will receive a 70% as their test grade.
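For concreteness, the core-concepts rule above might be sketched as follows. This is our illustration, not either instructor’s actual code; in particular, how a partially completed core is scored is an assumption.

```python
# Illustrative sketch of the "core concepts" exam-grade rule.
# Not either instructor's actual code; proportional scoring of an
# incomplete core is our assumption.

def exam_grade(mastered, core, total=16, floor=70.0):
    """Exam grade (0-100): all core concepts must be mastered before any
    other concept counts; a complete core alone earns the floor (70%),
    and each additional mastered concept closes the remaining gap."""
    if not core <= mastered:
        # Core incomplete: credit only the mastered core concepts.
        return floor * len(core & mastered) / len(core)
    extras = len(mastered - core)
    return floor + (100.0 - floor) * extras / (total - len(core))
```

Under this sketch, mastering exactly the 7 core concepts gives a 70%, and mastering all 16 concepts gives a 100%.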
We both have a love/hate relationship with core concepts, which we tell students are “required” concepts. We’ll start with the love:
The cons (we will use cons instead of hate since we think hate is really too strong):
Overall, we both really like core concepts. This way, we can make our derivative questions hard enough that students will really show us they have mastered the concept without the ability to skip that concept because it is “too hard”. It allows students to be better prepared for their sequential courses and encourages them to review their old material. If you worry that students will skip some important ideas that won’t necessarily show up in other concepts because of the mastery-based system, this may be a good variation for you!
Hello! I’m not Mike!
My name is Derek Thompson, and I’m an Assistant Professor at Taylor University in Upland, IN. Like the others on this blog, I’ve seen the transformative power of mastery-based testing. And like Mike, I’ve seen how powerful it can be when combined with specifications grading. What this post will be about is a huge mistake I nearly made – here’s what not to do with specifications grading and mastery-based testing.
One of the largest perks of mastery-based testing is getting the buy-in from students on day one. The idea that they can retake exams, and that they can even skip the final if they do well, is alluring and comforting. Specifications grading takes this a step further. My syllabi were carefully designed last semester to adjust one aspect of MBT and to amplify another.
First, weekly quizzes (“quests”) were the first crack at a topic, the two midterms were the second, and the final was the third. This meant that unlike the traditional MBT model, each topic was given three attempts rather than favoring earlier topics with more attempts. Second, if you look at the specifications outline below from last Spring’s Advanced Calculus course, you can see that grades can only go up. Mixing MBT with traditional weighted percentages for other aspects of the course kept me from making this claim previously.
There were a number of growing pains with this method in my first run in the Spring, but overall I think it was a success. The main problem was that the topics were too easy, and in that course no one even needed to take the final. The university, of course, wants us to have a final on record in case grades are contested, and likewise midterms are somewhat needed to indicate poor grades at midterm to coaches, advisors, etc.
With specifications grading, it’s easy to simply add more specifications, although you run the risk of making the syllabus too complex. When I presented the ideas at Mathfest 2016, I presented this “solved” list of specs for my Discrete Math for Computer Science course.
Do you see the problem? While it makes sense that the final can be its own grade (and you don’t see this on the chart, but it would still count for a retake of each core objective), one of my main benefits of specs grading is now a lie. Students can do all the work and tank the final, and they would receive an F in the course by this model. This generated some awkward looks in the audience, and I thank God that I gave a poorly thought-out presentation before I gave my students a poorly thought-out syllabus.
I said something in that talk that I think is succinct and profound, and illustrates why specs grading is the natural evolution of MBT. Last Fall, before I did specs grading, my students were already doing mastery proofs and redoing WebAssign problems in addition to MBT. The whole course was mastery-based – except for my syllabus. That’s important, and the syllabus I presented failed to keep that aspect from last Spring.
With that plan properly in the garbage, I’m still presented with the problem of needing students to take the midterm and the final. I’ve got a bad habit of moving by miles when I should only go inches if I’ve already got a good framework. Keeping myself in check, my solution for this semester is to keep the weekly quests but have some new topics on the two midterms, just like there would be in traditional MBT. These topics can be attempted again on the final. Here’s the corrected Discrete syllabus:
They can still opt out of the final if they’ve done well enough, but since it will only be the second attempt for the midterm topics, it’s not as likely. It guarantees a large exam grade that I can use for midterm reports. While it doesn’t allow for equal attempts at each topic, it allows me to categorize them – the heavy stuff will be on the weekly quests when it’s fresh and retested twice; the midterm topics will be lighter reminders of old material that only gets retested on the final. Of course, students will have a few “coins” (I call them “power-ups”) to do specific retakes in my office, if they need to. It’s also a nice way to hand out “extra credit” by allowing retakes for doing extra work – they’ll still need to redo the original material correctly.
You’ll also notice that I have twice as many core objectives as I had Quests for Advanced Calculus – breaking the objectives into smaller chunks will help relieve pressure and also make it more likely that they’ll have a few odds and ends to show up for on the final. It’s also consistent with the ACM standards for discrete math, which include about 40 learning objectives. Having some new ones as midterm topics helps space out this large number as well.
And you might also notice that there’s still one way grades can go down – unexcused absences. This is consistent with official university policy, but now it’s neatly wrapped into the specs so that students are reminded of the issue, instead of it being in a random corner of the syllabus and forgotten.
To summarize, I think that one of the most appealing aspects of MBT and specs grading (done right) is how grades only increase. Students begin with an F, but they can never lose credit for work accomplished. That’s a huge motivator, and it encourages growth mindset, grit, and all of those other learning dispositions we’ve been talking about. Feel free to comment below with questions, or send an email to firstname.lastname@example.org.
In part I, I began describing some of the nuts and bolts of implementing specs grading. In part II, I’ll finish describing some of the finer details of the system, and also discuss how MBT fits naturally into a specs system.
I first implemented MBT in a Calculus I course in the spring of 2015 and realized, as Austin writes, that MBT is “self-evidently better” than traditional, points-and-partial-credit-based testing. I’ve used some form of MBT in every course since then.
Around the same time, a friend retweeted a post by Robert Talbert of Grand Valley State University that made reference to “specifications grading”, a system proposed by Linda Nilson (I would strongly recommend you read her book if you are thinking of implementing specs grading). With my mind already opened to the possibility of alternative grading systems, I took the plunge and read several of his blog posts, then hosted at the Chronicle of Higher Education and now on his own site. To be clear, most of what I have learned about implementing specs grading in a math course, I have learned from reading him. But in this post and the next, I’ll do my best to explain how the system works, and how naturally MBT fits into such a course.
This post was written by Jeb Collins.
Welcome back to the nuts and bolts of mastery-based testing. This post is part 2 about the logistics of actually implementing this assessment method in the classroom. If you haven’t read part 1, Katie does a great job explaining how to prepare before the semester and what happens during the first week. In this post I will be talking about what happens once the semester starts: how to create the multiple tests and quizzes that are written throughout the semester, how to grade those tests and quizzes, and how the final grade is calculated. I’ll be writing about my experiences and choices in how this is implemented, with the huge caveat that there are many ways of implementing this method, and mine is simply one of them. I’m sure you’ll see many others in future posts.
Choosing Exam Problems
Once the mastery topics are chosen, the individual problems must be chosen to appropriately test those topics. There are a couple of things to keep in mind when choosing the problems. First, one of the main differences from a points-based approach is that you need multiple versions of each question. Therefore, you want to choose questions that can be easily modified. I have found that such questions tend to come in two forms. The first is something like a differentiation or integration question, where the question can be changed completely by a small change to the function involved. It is generally very easy to come up with multiple versions of those questions. The other form is word problems, such as related rates. In order to qualitatively change the question, a good deal of work is needed to come up with a new problem. For those questions I like to find two or three that are qualitatively different, and then create different versions from those two or three by simply changing numbers.
I also choose slightly harder problems for my mastery-based tests. I have a couple of reasons for this. First, I don’t give a cumulative final, so once a student has mastered a topic they are never tested on it again. So I want to be sure that students who obtain mastery on a topic have truly mastered the material. Also, each test has only four new problems on it, and therefore is slightly shorter than a typical points-based test.
Finally, something to consider when writing questions is their length. Mastery questions will often have multiple parts to them, since most topics cannot be tested by one problem. However, this can lead to trouble if the question is too long, since failure on one part can lead to a loss of mastery on the whole topic. I have one example from Calculus 2 where I wrote an integration question that included both trig substitution and partial fractions. I noticed that often students would get one part right and another wrong, and therefore wouldn’t get mastery on the topic. The next semester I decided to split those techniques into two separate topics, and it worked much better. I had more total mastery topics, but the students were better able to demonstrate their ability on the different integration techniques.
Something I didn’t realize when I started using mastery-based testing was that writing the tests can be considerably different from a points-based method. In a points-based method, each test written is the same length and covers distinct topics. In mastery-based testing, each test is longer than the last, covering new material in addition to different versions of problems from the previous tests. In a points-based method, the quizzes are uniform; that is, each student takes the same quiz. In mastery-based testing, I may have to create as many as 18 different quizzes for one day, as each student may wish to attempt a different problem. All of this adds up to needing a different method to create these tests and quizzes in an efficient manner.
What I like to do is create a library of problems for each mastery topic. I will create a different file for each topic and store all the versions of my problem for that topic in the file. I usually create about 3-10 problems for each topic, creating fewer versions for the later topics since they will be used less often. This is a good thing to do during the summer or winter break, when I have some extra free time. With this library in hand, writing the exams or quizzes becomes a matter of copy and paste, and can actually be done very quickly.
In mastery-based testing, quizzes are an opportunity for students to obtain mastery on a single topic between the exams. Since failure is not only expected on the exams but is considered beneficial for the students’ learning, I want to give my students as many opportunities as possible to demonstrate to me they have learned from their failure on previous tests. As the semester goes on, the exams get longer and more stressful for the students, and quizzes are an opportunity for students to demonstrate mastery on a single question. This obviously increases their grade, but it also leaves fewer questions for them to attempt on the next test, which reduces stress. For these reasons I actually find students asking for more quizzes than I have time to give.
When I give quizzes, I let each student choose which mastery questions they would like to attempt on the quiz. This only makes sense since each student will have mastered different topics on the previous exams. To save paper, I don’t print out quizzes with all the questions on them. I have the students email me by the day before the test to let me know which problem they want to attempt. This is usually the biggest problem, because students forget about the email and are then unable to take the quiz. So I remind them often for about a week before the quiz to send me a quick, one-line email, and I also print out extra quizzes. Usually, by the time the second quiz happens, all the students who want to take the quiz remember to email me. I always give the quiz at the end of the class period. This is because there are some people who will not take it, either because they have mastered all the topics up to that point, or because they are simply not prepared by the time the quiz comes around. I am fine with this, and just let the students leave early.
I use the following grade distribution in my mastery-based classes:
I choose to have the majority of the student’s grade decided by the mastery score. My main reason for doing this is that I design the exam questions such that if students show mastery of them, they understand the material of the course. Another reason is that the mastery exams are where growth mindset is emphasized in the course, and I want the weight of the grade to reflect that emphasis.
One aspect of mastery-based testing that I have found to vary widely among implementers of this testing method is how the average test score is calculated. The way I calculate the test average is to subtract 5% for every question not mastered. This of course means that you can master no questions and still get a positive exam average (with 16 questions, mastering none still leaves 20%), but since this would result in an F under any reasonable grading scale, it really doesn’t matter. However, this does make it somewhat difficult for students to understand how to calculate their grade as the semester progresses. To help with this, I provide the following table on my syllabus:
This helps the students get an idea at the beginning of the semester of how many questions they need to master to obtain the grade they want. More importantly, it helps them determine their grade during the semester. This table also allows me to easily emphasize the importance of doing homework, as the students can easily see that doing poorly on homework requires them to do very well on exams to compensate.
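The subtraction rule above amounts to a one-line calculation. Here is a minimal sketch; the function name and the floor at zero are our illustrative choices, not the author’s code.

```python
# Sketch of the exam-average rule: subtract 5 percentage points for
# each of the 16 topics not mastered, floored at zero.

def exam_average(num_mastered, total_topics=16, penalty=5.0):
    """100 minus 5 points per unmastered topic, never below 0."""
    return max(0.0, 100.0 - penalty * (total_topics - num_mastered))
```

Mastering all 16 topics gives 100%, mastering 10 gives 70%, and mastering none still leaves 20% (the “positive exam average” noted above).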
Hopefully these posts have given a good broad overview of how a mastery-based class could be run. As I mentioned above, this is only one way in which the class could be set up. There are really as many implementations of this method as there are instructors using it, but these basic guidelines should give a good starting place for those of you thinking about trying out mastery-based testing.
This post is aimed at educators who are considering taking the leap into mastery/standards/specifications-based assessment, but aren’t sure where to begin. My biggest question when I first heard about mastery-based testing (MBT) was: No points(!) — how does that work?
This post is part 1 of the ‘Nuts and bolts’ instructions for how to do points-free assessment with MBT. Part 1 deals with the logistics before the course begins — how should you write the syllabus and explain MBT to your students? Part 2 explains the specifics of writing exams and grading that you will be dealing with once the course starts.
Taking a mastery-based testing (MBT) approach to a course requires some advance planning, but it is the type of planning that not only helps with the assessment but also with the long-term goals of the course. If you are nervous about trying a new assessment method like MBT or specification grading, you might consider starting it in a course you’ve taught before. I did this during the first year of my current job — I taught Calc 1 in the fall with a normal points system (100 point exams, 10 point quizzes, etc). The following semester I taught a section of the same course, but used an MBT approach to assessment.
Feeling comfortable with the content is important for laying the groundwork of mastery-based assessment because you’ll need a good idea of what it is the students should master in the course before you begin. You can think of it as writing the final exam before you write your syllabus. If you are the type of person who has already written your final for the course you’ll teach next semester — then congratulations, this will be easy for you! If instead you are like me and sometimes write the final exam during finals week, then I’m here to convince you that it’s still (mostly) easy to use MBT.
The first step is to create a list of the essential skills and topics that the course will cover. If you haven’t taught the course before, creating a list of learning outcomes is one of those recommended teaching techniques that (to be honest) I didn’t hear about until I had taught several courses on my own. [See, for example: Writing Student Learning Outcomes] In short, it is a good idea, even if you aren’t using MBT next semester. Your list might consist of both general skills like “students will engage in problem solving” and also subject-specific techniques like “students will be able to take the derivative of a composition of functions using the chain rule”. How you group the learning goals for assessment depends on the course and your own preference. For simplicity, you may initially try to limit yourself to 16 umbrella topics, and students can try to master each topic. For examples of lists like this, see the resources page.
I’ll talk more about choosing the list of skills, but now it’s time to write the syllabus!
Syllabus explanation of MBT
Your syllabus will look mostly the same as normal, except it should have some explanation of the mastery-testing method. Whether you put the list of topics on the syllabus is up to you — I did not put the list of topics the first time I used MBT because I was still figuring things out when the semester started. Now, however, I do try to put the list of topics on the syllabus because it helps to have the list easily accessible to students.
In the syllabus section where you would normally explain how many midterms there will be and whether the final will be cumulative, you will now have to explain how MBT works. I borrowed most of this directly from Austin’s syllabus, but this is just a starting point. In my class, I call the mid-terms “Quests”. The final exam is really just another Quest. Here is the MBT statement from my syllabus for Calc II.
Grades in this course will be determined by an assessment system that relies on mastery of 16 sorts of problems. For each type of problem you will have multiple attempts to demonstrate mastery. The examinations (Quests and the final exam) will all be cumulative. The first Quest will have 5 problems, the second will have 5+4=9 problems (with 5 being variants of the ones occurring on the first quest), the third will have 9+4=13, and the fourth quest and the final exam will both have 16 problems. There may also be re-attempts periodically to allow for further attempts at each type of problem.
I record how well you do on each problem (an M for master level, an I for intermediate level, and an A for apprentice level) on each quest. After the final, I use the highest level of performance you achieved on each sort of problem and use this to determine your course grade.
If at some point during the semester you have displayed mastery of each of the 16 sorts of problems, then you are guaranteed at least a B+ (homework and Maple proficiency will determine higher grades). The grade of C can be earned by demonstrating mastery of at least 12 of the types of questions. If you master at least 8 of the types of problems you will earn a D-. A more detailed grading table is given below.
(See Jeb’s post on Nuts and Bolts: Part 2 for an example of a grading table.)
This method of arriving at a course grade is unusual. There are several advantages. Each person will have several chances to display mastery of almost all of the problems. Once you have displayed mastery of a problem, there is no need to do problems like it on later exams. It is possible that if you do well on Quests you may only have one or two types of problems to do on the final exam. It is also possible that a few students will not even have to take the final exam.
This method stresses working out the problems in a completely correct way, since accumulating a bunch of Intermediate-level performances does not count for much. It pays to do one problem carefully and completely correct as opposed to getting four problems partially correct. Finally, this method allows you to easily see which parts of the course you are understanding and which need more attention.
If at any point during the semester you are uncertain about how you are doing in the class, I would be very happy to clarify.
[Aside: Another MBT enthusiast uses something like Padawan/Jedi/Knight instead of Apprentice/Intermediate/Master. Come up with your own names, you can. Yes, hmmm.]
Writing a statement about MBT in the syllabus is a first step towards getting student buy-in. It is equally important to have a prepared summary you can give on the first day or during the first week about what MBT assessment is and why you use it.
Selling the idea to students/first day comments
On the first day of class, I spend about 10 minutes explaining how my version of no-points assessment works, and why I choose to determine course grades in this way. My biggest selling point to students is that I want to give them more than one try to demonstrate that they understand the course material. It is helpful to emphasize that while the standard for mastery is high, every single person in the class has the chance to succeed with MBT.
Most students have not encountered a mastery-grading system before, but they are usually excited by the idea that they may not have to show up for the final exam if they master all of the topics during the mid-term assessments. (That is, if you choose to handle your course in this way — there are other models that require a cumulative final exam regardless of mastery levels during the semester.)
When explaining the system to my students, I use the syllabus explanation as a guide, and I add in my own personal reasons for approaching assessment in this way — here are a few.
When presenting the MBT system in the first week, I try to be relentlessly positive. Usually it goes over well. Some students may wish to leave the class because it is different from what they are used to, and that’s ok too! On the whole, enrollment in my classes has been steady since I’ve started to use mastery grading.
Incorporating other graded components in the course
It is easy to incorporate other components of the course that are not appropriate for an in-class assessment — for example, practice homework problems, projects, oral presentations, Maple projects, and so on. You can choose to grade these other components in the mastery style (M/I/A or K/J/P) or you can revert to a standard points system for these components. I’ll demonstrate with an example of how I incorporated homework into Calc I and II.
I used WebAssign (an online homework system) for daily homework in Calculus I and II. I gave students repeated attempts on the homework, just as I did on exams. In the case of homework, 6 attempts was the cutoff. In the final course grade, homework counted the same as two of the “mastery topics” — if a student got at least 70% of the homework problems correct (eventually), I treated it as one mastery. If they got at least 90% of the homework problems correct, I treated it as two masteries. Therefore, my 16 “topics” grew to 18 in the final course grade, and two of them aren’t really topics at all; they depend on homework. There are other ways to incorporate homework or presentations — see the resources page for more examples.
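The homework-to-mastery conversion above can be sketched in a few lines. This is an illustration of the stated rule (at least 70% eventually correct counts as one mastery, at least 90% as two); the function names are our assumptions, not part of the author’s setup.

```python
# Sketch of folding online homework into the mastery count:
# >= 90% of homework problems (eventually) correct -> 2 masteries,
# >= 70% -> 1 mastery, otherwise no homework credit.

def homework_masteries(fraction_correct):
    """Convert the eventual homework completion fraction to 0, 1, or 2
    mastery credits."""
    if fraction_correct >= 0.90:
        return 2
    if fraction_correct >= 0.70:
        return 1
    return 0

def total_masteries(topics_mastered, hw_fraction):
    """Mastery count out of 18: 16 exam topics plus up to 2 homework credits."""
    return topics_mastered + homework_masteries(hw_fraction)
```

A student who masters all 16 exam topics and finishes 95% of the homework ends with the full count of 18.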
Deciding on the list of skills to master
I recommend drafting a final exam that you would give in a standard version of the course, and determining the skills list based on that exam. It’s a great way to see what you are hoping students will accomplish. Some problems may require multiple different skills, so those sorts of problems may become two different topics. Others might not merit an entire topic on their own, but they might be easily grouped with another type of problem. For example, I had a topic in Calc II called “Applications of Integration to Arc Length, Surface Area, or Probability”. That topic usually had a question with two unrelated parts — but both parts used integration to solve a problem.
It’s a process of trial and error, for sure! I have edited my list of topics from past classes, after discovering that some topics shouldn’t be given equal weight in the final course grade.
Jeb’s post continues with an in-depth view of choosing exam problems, dealing with quizzes, and grading.
Welcome to the inaugural post on the Mastery-Based Testing Blog. We are all excited to share an assessment technique that many of us find to be superior (at least one of us self-evidently so) to traditional exams. (I should credit George McNulty at the University of South Carolina for introducing me to the idea by way of subjecting me to the exams as a graduate student.) What follows is an essay form of a short talk I’ve given at a few national conferences. My aim here is to give a brief overview of the method (others will speak in more detail about specific implementations) and share several aspects in which you and your students may find it preferable to traditional points-based exams.
A brief word about the title: It was intentionally chosen to be half facetious and half sincere. On the one hand, I lack the evidence to make a statistically significant case that this method promotes student gains, so my defense is to shame the reader for demanding such evidence; the claim stands on its own. (This is where you struggle to gratify me with a brief chuckle.) More seriously, I think the aspects in which the method shines are varied and even somewhat nebulous, as I hope to demonstrate. To choose a single metric like improved grades or increased retention is, I claim, to miss the point somewhat. Large-scale studies on best practices are certainly worthwhile, but the end result is a claim about effectiveness that has been averaged across many different institutional missions, instructional strategies, student bodies, etc. As you read my thoughts on mastery-based exams, consider the impact you think its implementation would have at your institution, in your courses, and with your students. I hope that thinking about your particular details will help make it “self-evident” to you, at least, that the idea is worth exploring further.
What Is It?
As I mentioned in the introduction, my aim is to give only the barest sense of the method. My colleagues will provide more details on specific implementations. Among all the implementations, however, there appear to be three essential characteristics.
Clear Content Objectives
In my study guides, I partition course content into roughly sixteen big ideas, each consisting of a few related subtopics. For example, under the heading of “Limits” in multivariable calculus, my students would find:
I feel that a student needs to demonstrate proficiency in all these situations before claiming to be a “master” of the concept of a multivariable limit. I accomplish this by including a single question on limits consisting of three parts, one for each item. There is no hint as to which part requires which technique, since selecting an appropriate tool for a new problem is itself a skill I want to develop.
Credit Only For Eventual Mastery
There is no partial credit in a mastery-based exam; a question is either mastered or it is not. What does it mean to master a question? One of the refreshing aspects of the method is that “mastery” is at the discretion of the instructor. My standard tends to center around a single question:
Will the student benefit from studying the objective again?
In the multivariable limit example, consider a squeeze theorem proof that goes off the rails early due to some absent-minded factoring error, but otherwise correctly applies the theorem. In my opinion, this problem is mastered; the student has demonstrated proficiency with the intended idea. Contrast this with a proof where the student divines the correct limit, but does not justify the bounds used in the proof. This student would most likely benefit from taking a second look at how inequalities allow the squeeze theorem to accomplish its goal.
Multiple Attempts With Complete Forgiveness
To allow students time to incorporate instructor feedback and progress toward mastery, it is important to allow multiple attempts on each big idea. Moreover, to emphasize our desire for eventual (rather than immediate) mastery, I believe it is crucial that previous failed attempts carry no penalty. There are many creative ways of accomplishing these goals, but my exam structure typically resembles the following:
(For Tests 3 and 4, I often split the old questions from the new and hold the test over two days to give students more time to work.) So, for example, a question addressing Objective 1 appears on every exam. The versions appearing on each exam are different enough from one another that rote memorization is no help, but they are similar enough that they are clearly addressing the same objective. Under this exam structure, a student could fail to master Objective 1 four times without penalty, only to display true mastery on the final exam and earn full credit. At the other extreme, a student may display mastery on the first attempt and simply ignore the alternate versions appearing on subsequent exams. A student need only display mastery one time to earn credit for the objective. (I emphasize this point to stress that no student is attempting to solve sixteen calculus problems in an hour. Each student approaches an exam with a personal list of 3–5 objectives they hope to master on a given attempt.)
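For readers who like to see the bookkeeping spelled out, the credit rule can be sketched in a few lines of Python. This is an illustrative sketch, not my actual gradebook; the attempt records below are hypothetical, but the rule is the one described above: an objective earns credit if any attempt displays mastery, and earlier failures carry no penalty.

```python
# attempts[objective] = list of True/False mastery results, one per exam
# on which the objective appeared (hypothetical records)
attempts = {
    "Objective 1": [False, False, False, False, True],  # mastered on the final
    "Objective 2": [True],  # mastered immediately; later versions ignored
    "Objective 3": [False, False],  # not yet mastered
}

# An objective counts as mastered if ANY attempt succeeded.
credit = {objective: any(results) for objective, results in attempts.items()}

for objective, mastered in credit.items():
    print(objective, "mastered" if mastered else "not yet")
```

Note that the four early failures on Objective 1 and the single success on Objective 2 earn identical credit; only eventual mastery is recorded.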
It is an unfortunate reality that many of our students will not or cannot devote as much time as we might like to mathematical study. How does the structure of our assessment impact the way in which these precious hours are used? In the following sections, I hope to demonstrate some important ways in which I feel a mastery-based exam surpasses a traditional points-based exam.
Depth of Knowledge
Points: Understand everything superficially.
Mastery: Understand some concepts in great detail.
In most points-based systems, a blank exam question is a heavy blow to a student's grade. On the other hand, a student who provides a couple of relevant formulas and something resembling the beginning of a solution may receive half credit or more. When study time is constrained, a good strategy is to learn some basics about every test item. Such a student may earn half credit on most items together with a few lucky shots on easier items, which amounts to a passing grade overall. Take a moment to consider whether this experience has adequately prepared the student to apply mathematical thinking to nontrivial problems in the future.
The “broad and superficial” strategy employed above earns no credit under a mastery-based system. Instead, a student who wishes to earn a passing exam grade must fully understand an appreciable subset of the main ideas of the course, and a student wishing to earn an A grade must fully understand most or all of the main ideas of the course. Even if students spend no time studying a particular item, I contend that the experience of pursuing deep understanding on the other items leaves them in a stronger position to engage deeply with the troublesome topic when it is needed in the future. Moreover, depth of understanding is critical to one’s ability to apply existing mathematical knowledge in novel domains.
Meaningful Office Hours
Points: “How can I easily pick up some points?”
Mastery: “What should I study to fully understand this concept?”
Making grades a function of understanding rather than accumulated trivia lends itself to more meaningful discussions during office hours. A student seeking only to gain more points asks superficial questions with easily-memorized answers (items that are readily available in the textbook). A student seeking mastery must be willing to reflect upon their partial understanding, ask pertinent questions that address their current misunderstanding, and synthesize the conversation into a holistic approach to the concept.
Perseverance
Points: Try a problem once (maybe twice) and hope for the best.
Mastery: Keep trying until you succeed (and I know you can).
One might make the case (and I frequently do) that even students who are far removed from science will benefit from mathematical study because it is exceptionally effective in training students to persevere in solving complex problems. Points-based assessment undermines this valuable experience; a student can often obtain a passing grade without working even a single problem to completion. Indeed, even those who might take a second look at a challenging exam problem may have no incentive to do so if it does not appear on the final. By contrast, a mastery-based exam requires that a student display satisfactory depth of understanding to receive any credit, and this may take several attempts. It falls on the instructor to engender a classroom atmosphere in which these multiple attempts are seen not as shortcomings, but rather as the very essence of deep learning.
Incentive to Review
Points: Review key concepts for the final (maybe).
Mastery: Review key concepts now.
A points-based system can provide a perverse incentive to ignore key early concepts. As an example, I ask students to provide the vector equations of lines and planes to help develop their intuition about the geometry of vector operations. Under a typical points-based system, a student does not directly benefit from revisiting these foundational concepts. Rather, they are encouraged to press on toward a superficial understanding of applications of that concept, which is both counterproductive and meaningless.
A mastery-based system gives students an immediate reason to revisit important concepts early in the semester, since they will have an opportunity to master them on the very next exam. Continuing with the same example, I did, in fact, recently work with a student who demonstrated poor understanding of vector geometry on the first test only to master the concept on the second. After receiving his graded test, he remarked that studying for that introductory question helped him better understand the new content. It seems understanding vectors is essential to understanding vector calculus.
Growth Mindset
Points: Failure is undesirable and incurs penalties.
Mastery: Failure is an opportunity to improve understanding.
The effect of testing on growth-mindedness is, in my estimation, one of the most important facets of this discussion. A points-based system sets arbitrary deadlines by which time perfection must be attained or else penalties are incurred. Each helpful remark from the instructor is coupled with the sting of a progressively lower grade; the more helpful the remark, the greater the deduction. A mastery-based grader can include plenty of penalty-free insight to help the student improve their understanding. Such feedback is actively desired by the student and is sure to be studied since subsequent exams will provide opportunities to earn the missing credit. The message of the mastery-based exam is well-aligned with the development of a growth mindset: “You don’t understand this concept yet, so here’s some advice on how to improve. Come back next time and show me what you’ve learned.”
Reduced Test Anxiety
Points: Every test has the potential to “ruin” GPA.
Mastery: No single test can harm the grade.
Among all the items on this list, this one seems to vary most wildly across departments. My students frequently tell me that being able to try questions multiple times with no penalty considerably reduces their test anxiety. Some of my colleagues, on the other hand, report student frustration when they receive feedback on several near misses (which receive no credit) on their exam. There is no panacea for student discontent, but my advice is to hear their complaints while frequently (and gently) insisting that you think this obscure testing scheme is better for them in the long run. I also suggest you keep an eye on your gradebook. If a student is making unsatisfactory progress, consider a short office meeting where you and the student map out a plan of study for the rest of the semester. While I still contend that the general student reception of mastery-based testing is positive, a student with only one problem mastered at midterm is likely to be feeling quite hopeless. A short pep talk and a realistic plan to climb out of the pit will go a long way.
Meaningful Grading
Points: How many points is this error worth?
Mastery: Will the student benefit from studying the concept again?
Points-based grading is inherently punitive; one must examine a proposed solution looking for opportunities to deduct credit. Moreover, the resulting feedback may not be terribly meaningful. Imagine two students computing the volume of a solid of revolution. One arrives at the correct integral but makes several errors in its evaluation. The other begins with the wrong integral but evaluates it perfectly. A points-based grader must decide how many points to deduct for each of these very different sorts of errors. On a mastery-based exam, one rather asks whether the student will benefit from additional study. Here the answer is clear and meaningful. Computational errors are worth an admonishment to be more careful in the future, but they do not merit additional time spent studying solids of revolution. The fundamental conceptual error, however, may indeed suggest to the grader that the core ideas have not been understood, and the student can be directed to revisit the topic.
Faster Grading
Points: Good answers are carefully checked. Point-grubbing is incentivized.
Mastery: Mastery is spotted instantly. All attempts are genuine.
That the ever-growing mastery-based test is faster to grade than the static points-based test seems implausible, but experience bears it out. In a points-based system, a proposed solution has to be carefully scrutinized to determine whether it is worth 6/10 or 7/10 points. Even worse, one has to wade through the mire of 2/10 solutions as students desperately fish for points. Most questions on a mastery-based exam, by contrast, are either left blank or are graced with complete, correct solutions. Both sorts of questions can be graded instantaneously.
The Plural of Anecdote
Several of my colleagues (many of whom are helping to develop this blog) issued a student experience survey at the end of their mastery-based course. The following data reflects some highlights generated by 140 students (both majors and non-majors) across six institutions. We hope to pursue a larger, formal study, but these blips of data give us hope for the time being.
Give It a Try
Those of us developing this blog feel that the mastery-based examination is a superior form of assessment with much to offer teachers and learners of mathematics. I encourage you to read the experiences others are sharing. We are also compiling a collection of resources to help you implement the technique as painlessly as possible. Finally, if you agree with, disagree with, or wish to supplement anything I have said in this article, we would all love to hear your thoughts in the comments. This community looks forward to your continued interest.