I have been meaning to write this blog post since Christmas break, but it always got pushed to the back burner. Today, I finally decided to go ahead and “eat the frog” and talk about something I have been struggling with recently in some of my MBT courses: what to do when you fall behind in a course. Here are some of the things I have done when I have gotten off-pace in a class:

- Adjust the number of concepts
- Adjust the number of exams or testing opportunities
- Adjust the mastery concepts of the course
- Streamline content in the course if you can (trim the fat)

Now, I think all of these are good ways to deal with falling behind as a teacher, but I want to warn you that making some of these adjustments may have unexpected results as well. Hopefully, my struggles will help you decide on the best course of action for your own situation if you fall behind.

I consider myself to be a relatively “experienced” MBT-er. I have used mastery-based testing for four years in classes ranging from Calculus to Real Analysis. When I use mastery-based testing, I usually have 4 exams and a final with a total of 16-18 concepts. This structure has worked well for my 4-credit Calculus courses. It also worked well for my 3-credit Real Analysis course during my first two years using MBT. I was pleased with how these MBT courses went and my students were generally happy. But like all teachers, I was looking for ways to improve my courses. And the tweaks I made in my Real Analysis class over the past two years have yielded some unexpected consequences that I had to deal with in the middle of the semester.

During the summer of 2016, I went to an IBL workshop to help me incorporate more inquiry activities into my classes, specifically Real Analysis I. I have always tried to make my courses active (I use group work, ICE Sheets (In-Class-Exercises), clicker problems, and think-pair-share in my courses), but the IBL workshop challenged me to stretch my students more, and I started adding more inquiry problems to my ICE-sheets. So I did a big overhaul of my Real Analysis class.

**Oh my gosh, I am so behind!**

Unfortunately, adding more IBL to my Real Analysis course changed the pacing. Instead of spending a day or a day and a half on topics, we were spending over a week on some of them. Thus, I decided to delay our first MBT exam because we hadn’t gotten to all of the concepts I had planned to have on Exam 1. As I fell further and further off my planned schedule, it became clear we weren’t going to get to the 18 concepts I had mapped out for my class. Midway through the semester, I had to decide what to do about this pacing problem, so I made two quick adjustments. First, I changed the number of exams from 4 to 3. Then I cut the number of concepts from 18 to “however many we would get to.” Unfortunately, with only 3 exams, my students didn’t have enough opportunities to retest, so I later had to make two more adjustments to my course. I added a testing week* between Exams 2 and 3 and changed the final exam to be a testing week as well.

**How my pacing issues begot other pacing issues**

These changes helped me give my students more chances, but they also caused a new problem with the pacing of the course. Because of the slower pace, instead of having exams placed more or less regularly throughout the semester, we ended up having Exam 2, the testing week, Exam 3, and the final testing week all within 4 weeks. It was very taxing on the students to be testing all the time. They were troopers and didn’t complain, but I could feel the extra stress. I vowed to do better next time.

This past fall, I tried to adjust and streamline some of the added inquiry activities, but I still had to delay exams and ran into similar problems. I am not sure if other MBTers have had this issue, but now that it has happened twice, I am going to try some of the following to help with this course in the fall.

- I could change the concepts of the course. In general, a lot of the first part of Real Analysis isn’t broken down into many concepts. A lot of the start of the class is getting students used to writing proofs and how to construct well-written arguments. I could try to come up with some new “proof-writing” concepts which would allow me to keep Exam 1 earlier in the semester, but with new concepts included.
- I could decrease the number of inquiry problems in some of the sections, especially those at the beginning. I could save some of these proofs for Real Analysis II or move them to the homework to save time. This would allow me to get through more of the concepts for Exam 1.

Perhaps I will try a combination of these two options and of course if any of you wonderful readers have other suggestions, please share them in the comments! =) I plan to follow up on this topic after the fall to let you all know how it went.

**Conclusion**

One of the things I love most about teaching is that it is this great “unsolvable problem” that I always get to keep working on. This past year has thrown me some interesting situations that I hadn’t planned for, which have required some adjustments to my MBT implementation. To close out this long-winded post, I will share some general advice on what I have learned from these mishaps:

- **Be willing to be flexible:** I think this is an important quality for all educators. I remember David Kung, Director of Project NExT, once said that even though you may be teaching the same subjects, each year you get to teach them to a new group of students, which changes the dynamic and keeps it interesting. We need to be willing to make logical and fair adjustments to our plans when things don’t go as expected.
- **Don’t be afraid to be transparent with students:** When the pacing fell far behind our Real Analysis schedule, I talked to my students about why I was adding the inquiry activities and explained that, because of this, I didn’t know exactly where we would end up at the end of the semester. When I was debating different ideas to help with the MBT pacing, I asked for their input. At the end of the semester, after their oral exams (not MBT oral exams, just regular oral exams), I gave them another chance to give feedback about the class. I shared some of what I thought didn’t go as planned or as well as I had hoped (the pacing) and asked them for suggestions. In the end, even though I felt like I was struggling more with teaching this course than in past semesters, I received the highest course evaluations I had ever gotten in Real Analysis. I think this was in part because I was transparent with them throughout the semester.
- **Buy-in is important:** I made sure my students understood why I was using MBT, and thus, when the MBT schedule didn’t go as planned, they still understood why I was using MBT and why I was adding more IBL.
- **Forgive yourself:** We aren’t perfect, and it is okay when things don’t go how you want. Forgive yourself, learn from the situation, and let it go. You can try to do better next time. For goodness’ sake, you are a mathematician; we love problem-solving!

Have you had to deal with the challenge of falling off pace? What have you done? What have you learned?

—————————————————————————————————————————————–

About the Blogger: Amanda Harsy teaches at Lewis University, a private, Catholic, Lasallian university located in Romeoville, Illinois, 35 miles southwest of Chicago. Lewis has 6,500 undergraduate and graduate students with a 34 percent minority population. Lewis is primarily a teaching university, and most professors teach 4 classes a semester. During her four years at Lewis, she has used mastery-based testing in Calculus II, Calculus III, Applied Linear Algebra, and Real Analysis with class sizes of 10-50 students. During this time, she has also completed a two-year study comparing MBT to traditional testing in her Calculus II courses. She is currently working on a follow-up to this study with Dr. Alyssa Hoofnagle from Wittenberg University.

* In my Calculus courses, I usually use 3-4 testing weeks between exams to give students extra opportunities to retest. Students can test any concept, but only once during a testing week. So, for example, a student could retest concepts 1 and 4 on Monday and concept 2 on Tuesday, but they couldn’t retest concept 1 again on Tuesday. Before this semester, I had never used testing weeks in Real Analysis.

---

My goal in managing the logistics has always been to make things as simple as possible for me. That might sound selfish, but it’s really more about self-preservation. Your mileage may vary when implementing any of the ideas/tools below, and that’s okay! You need to find the system that works for you and the students in your classes at your institution. Here are some general principles and ideas that have worked for me.

I learned this one the hard way. The first semester I implemented MBT in a calculus course, I started offering students the opportunity to take mastery quizzes approximately halfway through the semester. Quizzes were administered one day per week, and students were to sign up for up to two mastery objectives by the night before, so I had time to print the correct number of copies of each objective. Thus, if only two people wanted to do objective 3, I made only two copies of a page with an objective 3 problem and space for a solution. If no one wanted to do objective 1, I would make no copies.

There are lots of issues with this approach to logistics, but let me highlight the two that haunt me to this day:

- **Sorting and copying custom quizzes is overwhelming.** My system, at least, was to print the (hopefully!) correct number of copies of each problem, write the students’ names on them, and sort them by student before going to class. This ensured that everyone got exactly the problems they had signed up for without spending 10 minutes of class time sorting it all out. But it took me at least 20-30 minutes every week and was a real pain to manage.
- **Not all students remembered to sign up.** Now, this was certainly the student’s fault, not mine, but it did prevent the students who forgot to sign up from showing whether or not they had learned any calculus since the last mastery opportunity. If I had extra copies of problems relevant to the students who didn’t sign up, I would sometimes offer them, but what if there were multiple students who had forgotten? Who gets the benefit despite not following the instructions?

I spoke at JMM on implementing specifications grading in abstract algebra, and one of the questions I got (which I’ve gotten in previous MBT/specs grading talks) is how well I think the system would scale to a course with *N* students in it, where *N* >> 10. One of the main concerns seems to be tracking student performance on revisions/repeated attempts at the same problem.

My answer is always that it scales really well. To assist in the task of tracking student performance on repeated mastery opportunities, I’ve created an Excel spreadsheet. The instructions are in the file, but the idea is that each assessment gets its own worksheet in which you enter numbers corresponding to the students’ performance. Suppose that cell B8 corresponds to Student A’s performance on objective 1; this will be the case in every worksheet corresponding to an assessment. There is then a master worksheet that takes the maximum value of B8 over all the worksheets corresponding to the assessments. The master sheet then counts the number of times a student achieves a passing designation over all the objectives (and also counts the number of ‘high passes’, if your system has such a thing), which you can then plug into an appropriate formula to determine the student’s final grade.
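For instructors who prefer a script to a spreadsheet, the max-over-worksheets idea is only a few lines of Python. This is a hedged sketch, not the actual file: the 0-3 score scale, objective names, and assessment list are all hypothetical.

```python
# Each assessment is one "worksheet": objective -> score for a single student.
# Hypothetical scale: 0 = no attempt, 1 = attempted, 2 = pass, 3 = high pass.
assessments = [
    {"obj1": 1, "obj2": 2, "obj3": 0},  # Exam 1
    {"obj1": 2, "obj2": 3, "obj3": 1},  # Exam 2
    {"obj1": 3, "obj2": 2, "obj3": 2},  # Final
]

# "Master worksheet": the best score ever earned on each objective.
best = {}
for sheet in assessments:
    for obj, score in sheet.items():
        best[obj] = max(best.get(obj, 0), score)

# Count passing designations (and high passes, if your system has them).
passes = sum(1 for s in best.values() if s >= 2)
high_passes = sum(1 for s in best.values() if s == 3)

print(best)         # {'obj1': 3, 'obj2': 3, 'obj3': 2}
print(passes)       # 3
print(high_passes)  # 2
```

Because only the maximum ever matters, a student can never lose credit for an objective they have already passed, which mirrors the spreadsheet’s behavior.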

Along the lines of the first suggestion, make things easier on yourself by limiting out-of-class revision opportunities. In fact, through Fall 2016, I have not allowed students any out-of-class revision opportunities. In Spring 2017, I will be allowing some in-office revision opportunities, but at a price. As a devotee of specifications grading, I give my students a limited number of tokens each semester (they usually start with 5 or 7, and can sometimes earn more by doing additional optional work). My Spring 2017 calculus students will receive 7 tokens and can cash in two tokens for one additional mastery opportunity, taken orally in my office. The goal is to limit the number of such instances (I have 50 students) and to encourage the students to prepare well before coming to my office. I’ll let you know how it goes!

Anyway, these are a few of the things I’ve learned about streamlining and managing the logistical side of running an MBT course. What have you come up with?

---

Hi everyone, Amanda Harsy and Jessica O’Shaughnessy here to add a few thoughts on a variation of Mastery-Based Testing. Both of us have been using Mastery-Based Testing in our courses over the last two years. Jessica uses MBT in Calculus I, Calculus II, and Introductory Statistics, while Amanda uses it in her Calculus II, III, and Real Analysis courses.

We usually have 16-18 concepts in our Mastery Exams and students can add to their exam grade by mastering these concepts. Now we have a choice: should we treat all the concepts as equal? That is, can students master any of the concepts to build their grade? In some courses, it may make sense to let the students choose.

But what if you feel that not all concepts are created equally? That is, are there some concepts you really think students should have grasped after taking a class?

Both of us use a slight modification in our mastery-based grading to address this belief. For example, since Calculus II is a sequential course, there are certain concepts we think all Calculus II students should master in order to be successful in Calculus III. We want students to successfully be able to differentiate transcendental functions, calculate area, and use integration by parts. In order to enforce these concepts, we have introduced “Core Concepts.” In Amanda’s Calculus II course, for example, these concepts include differentiation and integration of transcendental functions, L’Hopital’s Rule, Advanced Integration Techniques, Calculating Area, Calculating Volumes, and Interval of Convergence for Series. Students must master these 7 concepts in order to earn at least a C for their exam grade. Similarly, Jessica breaks her Calculus I classes into 16 topics and 7 core/required concepts. The students must master these 7 before any other questions count. If they do only these required concepts, they will receive a 70% as their test grade.

We both have a love/hate relationship with core concepts, which we tell students are “required” concepts. We’ll start with the love:

- First of all, it forces students to have “mastered” important concepts they will need to understand to be successful in the subsequent course. As much as we love related rates, our students can probably manage Calculus II without fully understanding the topic. However, the chain rule is absolutely critical. Previously, they could get a C by half understanding the chain rule and half understanding related rates. This way, they know they cannot move on to Calculus II without a full grasp of the chain rule (and other required topics). They are better prepared to build on these topics.
- It also helps students focus on particular topics for each exam. For our weaker students, we can direct them to these concepts to get started. We want them to focus on the calculus concepts that they really need.

The cons (we will use cons instead of hate since we think hate is really too strong):

- This method for calculating the mastery exam grade is a little more complicated than treating all concepts equally. For example, it requires students to keep working on some of the more complicated integration techniques until they master them.
- What about when a student masters 14/16 concepts, but one of the two they didn’t master is a required concept? Like limits? Should they be allowed to pass the course when they understand so much of the material? Granted, we think this is an extreme case, but we all know these extreme cases happen so it is important to think about how to handle these situations. Here is one way that Jessica has dealt with these cases.
- “Occasionally I will have students with several required topics going into the final exam. This stresses the students because they feel like they will fail the course if they do not master those required topics. Although I do not tell them, I grade these in a different way on the final. Typically, they master or do not master. However, I write up those that are “close”. If they are close but have not fully mastered a question, then I will count it toward their passing grade, but I will not add it as an extra “mastered” question. For example, 7 mastered questions is a 70% for my test. If they mastered 7 questions: 6 required concepts and 1 non-required, this means they would have 1 required concept unmastered. If they are “close” I will go ahead and give them a 70% for the test grade. I will allow up to two that are “close”. My definition of close may be a few algebraic mistakes or some minor conceptual mistakes. Students who have no idea how to do the chain rule do not obtain a ‘close.’”-Jessica

- What should we do when students don’t try the required concepts? Unfortunately, we both have had students who haven’t attempted the core concepts. Then they complain that they had 7 questions mastered and they don’t understand why they received a D in the course. It is extremely important to emphasize that some concepts are required and to spell this out carefully in the syllabus. Of course, there will always be students who do not pay attention to this, but at least we have done our best.
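To make the core-concepts rule concrete, here is a rough Python sketch in the spirit of Jessica’s 70% floor. The topic names are invented, and the value of each extra non-core concept (spreading the remaining 30 points evenly over the non-core topics) is my assumption for illustration, not either author’s actual formula; it also ignores the “close” partial-credit rule described above.

```python
def exam_grade(mastered, core, total_topics):
    """Core concepts gate the grade: non-core mastery only counts once
    every core concept has been mastered."""
    if not core <= mastered:  # some required concept is still unmastered
        return 0.0            # simplification: real policies are gentler
    extras = len(mastered - core)
    per_extra = 30.0 / (total_topics - len(core))  # assumed point value
    return round(min(100.0, 70.0 + extras * per_extra), 1)

# Seven hypothetical core topics out of sixteen total.
core = {"limits", "chain_rule", "derivatives", "integrals",
        "ftc", "area", "optimization"}
mastered = core | {"related_rates", "linearization"}

print(exam_grade(mastered, core, total_topics=16))  # 76.7
print(exam_grade(core, core, total_topics=16))      # 70.0
```

The subset check is what makes the concepts “required”: no amount of non-core mastery moves the grade until every core topic is done.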

Overall, we both really like core concepts. This way, we can make our derivative questions hard enough that students will really show us they have mastered the concept without the ability to skip that concept because it is “too hard”. It allows students to be better prepared for their sequential courses and encourages them to review their old material. If you worry that students will skip some important ideas that won’t necessarily show up in other concepts because of the mastery-based system, this may be a good variation for you!

---

My name is Derek Thompson, and I’m an Assistant Professor at Taylor University in Upland, IN. Like the others on this blog, I’ve seen the transformative power of mastery-based testing. And like Mike, I’ve seen how powerful it can be when combined with specifications grading. This post is about a huge mistake I nearly made; here’s what *not* to do with specifications grading and mastery-based testing.

One of the largest perks of mastery-based testing is getting the buy-in from students on day one. The idea that they can retake exams, and that they can even skip the final if they do well, is alluring and comforting. Specifications grading takes this a step further. My syllabi were carefully designed last semester to adjust one aspect of MBT and to amplify another.

First, weekly quizzes (“quests”) were the first crack at a topic, the two midterms were the second, and the final was the third. This meant that unlike the traditional MBT model, each topic was given three attempts rather than favoring earlier topics with more attempts. Second, if you look at the specifications outline below from last Spring’s Advanced Calculus course, you can see that grades *can only go up*. Mixing MBT with traditional weighted percentages for other aspects of the course kept me from making this claim previously.

There were a number of growing pains with this method in my first run in the Spring, but overall I think it was a success. The main problem was that the topics were too easy, and in that course no one even needed to take the final. The university, of course, wants us to have a final on record in case grades are contested, and likewise midterms are somewhat needed to indicate poor grades at midterm to coaches, advisors, etc.

With specifications grading, it’s easy to simply add more specifications, although you run the risk of making the syllabus too complex. When I presented the ideas at Mathfest 2016, I presented this “solved” list of specs for my Discrete Math for Computer Science course.

Do you see the problem? While it makes sense that the final can be its own grade (and you don’t see this on the chart, but it would still count for a retake of each core objective), one of my main benefits of specs grading is now a lie. Students can do all the work and tank the final, and they would receive an F in the course by this model. This generated some awkward looks in the audience, and I thank God that I gave a poorly thought-out presentation before I gave my students a poorly thought-out syllabus.

I said something in that talk that I think is succinct and profound, and illustrates why specs grading is the natural evolution of MBT. Last Fall, before I did specs grading, my students were already doing mastery proofs and redoing WebAssign problems in addition to MBT. The whole course was mastery-based – except for my syllabus. That’s important, and the syllabus I presented failed to keep that aspect from last Spring.

With that plan properly in the garbage, I’m still presented with the problem of needing students to take the midterm and the final. I’ve got a bad habit of moving by miles when I should only go inches if I’ve already got a good framework. Keeping myself in check, my solution for this semester is to keep the weekly quests but have some new topics on the two midterms, just like there would be in traditional MBT. These topics can be attempted again on the final. Here’s the corrected Discrete syllabus:

They can still opt out of the final if they’ve done well enough, but since it will only be the second attempt for the midterm topics, it’s not as likely. It guarantees a large exam grade that I can use for midterm reports. While it doesn’t allow for equal attempts at each topic, it allows me to categorize them – the heavy stuff will be on the weekly quests when it’s fresh and retested twice; the midterm topics will be lighter reminders of old material that only gets retested on the final. Of course, students will have a few “coins” (I call them “power-ups”) to do specific retakes in my office, if they need to. It’s also a nice way to hand out “extra credit” by allowing retakes for doing extra work – they’ll still need to redo the original material correctly.

You’ll also notice that I have twice as many core objectives as I had Quests for Advanced Calculus – breaking the objectives into smaller chunks will help relieve pressure and also make it more likely that they’ll have a few odds and ends to show up for on the final. It’s also consistent with the ACM standards for discrete math, which include about 40 learning objectives. Having some new ones as midterm topics helps space out this large number as well.

And you might also notice that there’s still **one** way grades can go down – unexcused absences. This is consistent with official university policy, but now it’s neatly wrapped into the specs so that students are reminded of the issue, instead of it being in a random corner of the syllabus and forgotten.

To summarize, I think that one of the most appealing aspects of MBT, and specs grading (done right) is how grades only increase. Students begin with an F, but they can never lose credit for work accomplished. That’s a huge motivator, and encourages growth mindset, grit, and all of those other learning dispositions we’ve been talking about. Feel free to comment below with questions, or send an email to drthompson@taylor.edu.

---

**Revisions**

As Nilson describes in her book, the all-or-nothing nature of specs grading encourages students to do professional quality work the first time. On the occasion that a student turns in non-passing work, she suggests granting a handful of revision opportunities in the form of “tokens”. I have usually granted students five tokens per semester, to be cashed in for either a 24-hour extension on anything with a due date OR a free revision of non-passing work. This gives students an opportunity to learn from their mistakes (a key component of MBT as well), but not so many opportunities that you are swamped with grading. Only once have I ever had a student do more than one revision.

One of the challenges in administering a specs grading system in a math course is that it’s not always possible for a student to determine whether or not their work is *correct*. Think back to the days when you were first learning to write proofs; did you oversimplify the argument? Did you attempt to use a theorem that didn’t apply? These mistakes can be difficult to spot in one’s own work (even now, if we’re being honest!). Talbert’s elegant solution was to expand from the two-level pass/fail rubric to a three-level pass/progressing/fail rubric. A student earns a ‘Progressing’ designation if their work meets all specifications but contains some significant flaw in reasoning/computation. Work assessed at Progressing is given one free revision (subsequent revisions of the same work must use a token).

**Assigning a final grade**

In the end, of course, we need to assign a final grade. Nilson breaks grading systems down into two main categories: more hurdles and higher hurdles. The “more hurdles” model rewards students who pass more learning objectives with a higher grade. The “higher hurdles” model rewards students who pass more complex objectives/assignments with a higher grade.

I’ve settled on a straightforward “more hurdles” model: I essentially count the number of outcomes students have passed and assign the grade which corresponds to that number, as outlined in part in this table (where “Initial” describes student progress on learning outcomes on homework and “Secondary” describes student progress on learning outcomes on exams; note that as of this writing, these numbers are not finalized and may be subject to change):

In this way, there are no statistical/numerical calculations (which confounds our learning management system’s attempt at computing a current grade) involved in the final grade computation. See the syllabus for a more straightforward description of the system.
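Since the model is just counting and comparing, the whole grade computation reduces to a threshold lookup. This Python sketch uses placeholder threshold numbers, since the actual table values are not finalized:

```python
def final_grade(initial_passed, secondary_passed):
    """'More hurdles': the grade is the highest row whose counts are met."""
    # (min outcomes passed on homework, min passed on exams, grade);
    # the numbers here are hypothetical placeholders, not the real table.
    thresholds = [
        (18, 14, "A"),
        (15, 11, "B"),
        (12, 8, "C"),
        (9, 5, "D"),
    ]
    for init_min, sec_min, grade in thresholds:
        if initial_passed >= init_min and secondary_passed >= sec_min:
            return grade
    return "F"

print(final_grade(16, 12))  # B
print(final_grade(20, 15))  # A
```

No weighted averages are involved; a student (or instructor) only ever has to count passed outcomes and read off the row.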

**Traditional MBT in a specs graded course**

You may already be seeing how easily MBT fits into a specs graded course. In a traditional MBT course using in-class exams, you identify, before the semester begins, a list of skills/objectives students should pass in order to pass the course. Then the number of objectives each student passes is fed into a weighted average which computes a final grade.

In a specs graded course using MBT, no formula is needed: just count the number of exam problems passed by the end of the semester, and use that number (along with the amount of passing work in any other categories) to compute the final grade. I have done this in several calculus courses, and will be doing it again in calculus and linear algebra this fall. Students find that the MBT philosophy of exams fits very naturally with a specs graded course. In short, it’s not enough to get 7/10 on everything to pass; you must do quality work.

**Conclusion**

As you can hopefully see, specs grading and MBT have a lot in common; an emphasis on doing quality work and revision as a key part of the learning process are just two examples. I’ve found specs grading to be much more pleasing to use than traditional points-based grading, though I’m not convinced it’s right for every class. If you have any questions, feel free to sound off in the comments or send me an email.

---

Around the same time, a friend retweeted a post by Robert Talbert of Grand Valley State University that made reference to “specifications grading”, a system proposed by Linda Nilson (I would strongly recommend you read her book if you are thinking of implementing specs grading). With my mind already opened to the possibility of alternative grading systems, I took the plunge and read several of his blog posts, then at the Chronicle of Higher Education, and now hosted on his own site. To be clear, most of what I have learned about implementing specs grading in a math course, I have learned from reading him. But in this post and the next, I’ll do my best to explain how the system works, and how naturally MBT fits into such a course.

**Creating Learning Outcomes**

As Katie wrote for MBT, the first step (which is absolutely essential to do before the term begins) is to comb through the entire course and create a list of learning outcomes. These outcomes should be specific, and can encompass more than merely content knowledge. Here is the list of outcomes I have created for my Fall 2016 modern algebra course, and here is the syllabus which outlines the system in more detail.

**Creating Assessments**

Your next job is to determine *how* to assess student progress on the learning outcomes, and then create a set of assessments to accomplish this. This doesn’t need to be done before the term starts, though I found that creating my assessments in advance helped me refine the wording on each outcome to be more flexibly assessable.

In my modern algebra course, the assessments will be primarily homework and take-home exams, because I feel that these are the best ways to assess the learning outcomes, but in a different course, with a different set of outcomes, you may find that the use of, say, a project (or series of projects) would work better.

**Grading student work using specifications**

Here is where specs grading and MBT have a major spiritual overlap (and where the title of this post comes from): the assessments are graded pass/fail based on whether or not the students meet the desired learning outcome(s) according to a list of specifications for how the work should be done. In other words, it’s not just that there are no points or partial credit on the exams; there are no points or partial credit in the entire course.

In a specs graded course, the specifications document (modern algebra example here) should describe **exactly** what a student needs to do in order to earn a passing designation, down to the details of formatting. If at any point the specifications are not met, the student does not receive a passing mark, and may revise the work (with some limitations, which I’ll get into next time). Thus, it’s important that the specs be written in such a way that they are easily checked by the student and easily enforced by the instructor.

In modern algebra this fall, I’ve created a series of 13 homework assignments (here’s an example), each of which assesses a handful of learning outcomes using 1-3 problems per outcome. Rather than grade the assignment, I’ll grade the outcome: if a student passes all problems associated with a given outcome, they pass the outcome. If they don’t, there are limited opportunities for revision, which I’ll describe next time. Students will be required to pass some subset of the outcomes twice: on the homework assignments (which I am encouraging students to work on in groups) and on the take-home exams (which must be done alone).
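Grading by outcome rather than by assignment is easy to automate. Here is a minimal Python sketch of the rule above, where an outcome passes only if every problem attached to it passes; the outcome labels, problem names, and results are invented for illustration.

```python
# Which problems assess which learning outcome on one assignment.
outcome_problems = {
    "O1": ["p1", "p2"],
    "O2": ["p3"],
    "O3": ["p4", "p5", "p6"],
}

# Pass/fail result for each individual problem (no points, no partial credit).
results = {"p1": True, "p2": True, "p3": False,
           "p4": True, "p5": True, "p6": True}

# An outcome is passed only when all of its problems meet the specs.
passed_outcomes = {out for out, probs in outcome_problems.items()
                   if all(results[p] for p in probs)}

print(sorted(passed_outcomes))  # ['O1', 'O3']
```

A single failed problem (p3 above) fails its whole outcome, which is exactly the all-or-nothing behavior that makes revision opportunities important.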

On Wednesday, I’ll talk about the opportunities for revision, how I compute a final grade, and how naturally MBT fits into a specs graded course.

---

Welcome back to the nuts and bolts of mastery-based testing. This post is part 2 about the logistics of actually implementing this assessment method in the classroom. If you haven’t read part 1, Katie does a great job explaining how to prepare before the semester and what happens during the first week. In this post I will be talking about what happens once the semester starts: how to create the multiple tests and quizzes that are written throughout the semester, how to grade those tests and quizzes, and how the final grade is calculated. I’ll be writing about my experiences and choices in how this is implemented, with the huge caveat that there are many ways of implementing this method, and mine is simply one of them. I’m sure you’ll see many others in future posts.

**Choosing Exam Problems**

Once the mastery topics are chosen, individual problems must be chosen to appropriately test those topics. There are a couple of things to keep in mind when choosing the problems. First, one of the main differences from exams in a points-based approach is that you need multiple versions of each question. Therefore, you want to choose questions that can be easily modified. I have found that such questions tend to come in two forms. The first is something like a differentiation or integration question, where the question can be changed completely by a small change, such as altering the function involved. It is generally very easy to come up with multiple versions of those questions. The other form is word problems, such as related rates. To change such a question qualitatively, a good deal of work is needed to come up with a new problem. For those questions I like to find two or three that are qualitatively different, and then create different versions from those two or three by simply changing numbers.

I also choose slightly harder problems for my mastery-based tests. I have a couple of reasons for this. First, I don’t give a cumulative final, so once a student has mastered a topic they are never tested on it again. So I want to be sure that students who obtain mastery on a topic have truly mastered the material. Also, each test has only four new problems on it, and therefore is slightly shorter than a typical points-based test.

Finally, something to consider when writing questions is their length. Mastery questions will often have multiple parts to them, since most topics cannot be tested by one problem. However, this can lead to trouble if the question is too long, since failure on one part can lead to a loss of mastery on the whole topic. I have one example from Calculus 2 where I wrote an integration question that included both trig substitution and partial fractions. I noticed that students would often get one part right and the other wrong, and therefore wouldn’t get mastery on the topic. The next semester I split trig substitution and partial fractions into two separate topics, and it worked much better. I had more total mastery topics, but the students were better able to demonstrate their ability on the different integration techniques.

**Writing Exams**

Something I didn’t realize when I started using mastery-based testing was that writing the tests can be considerably different from a points-based method. In a points-based method, each test written is the same length and covers distinct topics. In mastery-based testing, each test is longer than the last, covering new material in addition to different versions of problems from the previous tests. In a points-based method, the quizzes are uniform; that is, each student takes the same quiz. In mastery-based testing, I may have to create as many as 18 different quizzes for one day, since each student may wish to attempt a different problem. All of this adds up to needing a different, more efficient method to create these tests and quizzes.

What I like to do is create a library of problems for each mastery topic. I will create a different file for each topic and store all the versions of my problem for that topic in the file. I usually create about 3-10 problems for each topic, creating fewer versions for the later topics since they will be used less often. This is a good thing to do during the summer or winter break, when I have some extra free time. With this library in hand, writing the exams or quizzes becomes a matter of copy and paste, and can actually be done very quickly.
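As an illustration, a problem library like the one described above can be assembled into per-student quizzes with a few lines of code. This is only a sketch: the topics, problem texts, and function names here are all hypothetical.

```python
import random

# Hypothetical problem library: topic -> list of interchangeable versions.
# In practice, each topic's versions might live in their own LaTeX file.
library = {
    "chain_rule": [
        r"Differentiate $\sin(x^2)$.",
        r"Differentiate $e^{\cos x}$.",
        r"Differentiate $\ln(3x^2 + 1)$.",
    ],
    "u_substitution": [
        r"Evaluate $\int 2x e^{x^2}\,dx$.",
        r"Evaluate $\int \cos(5x)\,dx$.",
    ],
}

def build_quiz(requested_topics, seed=None):
    """Pick one version at random for each topic a student requested."""
    rng = random.Random(seed)
    return {topic: rng.choice(library[topic]) for topic in requested_topics}

# A student who emailed asking for these two topics gets a custom quiz:
quiz = build_quiz(["chain_rule", "u_substitution"], seed=1)
for topic, problem in quiz.items():
    print(topic, "->", problem)
```

Since each version of a problem is interchangeable, the copy-and-paste step the author describes really is the whole job.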

**Quizzes**

In mastery-based testing, quizzes are an opportunity for students to obtain mastery on a single topic between the exams. Since failure is not only expected on the exams but is considered beneficial for the student’s learning, I want to give my students as many opportunities as possible to demonstrate to me that they have learned from their failure on previous tests. As the semester goes on, the exams get longer and more stressful for the students, and quizzes are an opportunity to demonstrate mastery on a single question. This obviously increases their grade, but it also leaves fewer questions for them to attempt on the next test, which reduces stress. For these reasons I actually find students asking for more quizzes than I have time to give.

When I give quizzes, I let each student choose which mastery questions they would like to attempt. This only makes sense, since each student will have mastered different topics on the previous exams. To conserve paper, I don’t print out quizzes with every question on them. Instead, I have the students email me by the day before the quiz to let me know which problem they want to attempt. The biggest snag is that students forget to send the email and are then unable to take the quiz. So I remind them often for about a week before the quiz to send me a quick, one-line email, and I also print out extra quizzes. Usually, by the second quiz, all the students who want to take it remember to email me. I always give the quiz at the end of the class period, because some people will not take it, either because they have mastered all the topics up to that point or because they are simply not prepared by the time the quiz comes around. I am fine with this, and just let those students leave early.

**Grade Distribution**

I use the following grade distribution in my mastery-based classes:

- Tests: 80%
- Homework: 20%

I choose to have the majority of the student’s grade decided by the mastery score. My main reason is that I design the exam questions so that a student who masters them understands the material of the course. Another reason is that the mastery exams are where a growth mindset is emphasized in the course, and I want that emphasis reflected in the grade.

One aspect of mastery-based testing that I have found to vary widely among implementers is how the average test score is calculated. The way I calculate the test average is to subtract 5% for every question not mastered. This of course means that you can master no questions and still get a positive exam average, but since this would result in an F under any reasonable grading scale, it really doesn’t matter. However, it does make it somewhat difficult for students to understand how to calculate their grade as the semester progresses. To help with this, I provide the following table on my syllabus:

This helps the students get an idea at the beginning of the semester of how many questions they need to master to obtain the grade they want. More importantly, it helps them determine their grade during the semester. This table also allows me to easily emphasize the importance of doing homework, as the students can easily see that doing poorly on homework requires them to do very well on exams to compensate.

Hopefully these posts have given a good broad overview of how a mastery-based class could be run. As I mentioned above, this is only one way in which the class could be set up. There are really as many implementations of this method as there are instructors using it, but these basic guidelines should give a good starting place for those of you thinking about trying out mastery-based testing.

]]>

This post is part 1 of the ‘Nuts and bolts’ instructions for how to do points-free assessment with MBT. Part 1 deals with the logistics before the course begins — how should you write the syllabus and explain MBT to your students? Part 2 explains the specifics of writing exams and grading that you will be dealing with once the course starts.

Taking a mastery-based testing (MBT) approach to a course requires some advance planning, but it is the type of planning that helps not only with the assessment but also with the long-term goals of the course. If you are nervous about trying a new assessment method like MBT or specifications grading, you might consider starting it in a course you’ve taught before. I did this during the first year of my current job — I taught Calc 1 in the fall with a normal points system (100-point exams, 10-point quizzes, etc.). The following semester I taught a section of the same course, but used an MBT approach to assessment.

Feeling comfortable with the content is important for laying the groundwork of mastery-based assessment because you’ll need a good idea of what it is the students should master in the course before you begin. You can think of it as writing the final exam before you write your syllabus. If you are the type of person who has already written your final for the course you’ll teach next semester — then congratulations, this will be easy for you! If instead you are like me and sometimes write the final exam during finals week, then I’m here to convince you that it’s still (mostly) easy to use MBT.

The first step is to create a list of the essential skills and topics that the course will cover. If you haven’t taught the course before, creating a list of learning outcomes is one of those recommended teaching techniques that (to be honest) I didn’t hear about until I had taught several courses on my own. [See, for example: Writing Student Learning Outcomes] In short, it is a good idea, even if you aren’t using MBT next semester. Your list might consist of both general skills like “students will engage in problem solving” and also subject-specific techniques like “students will be able to take the derivative of a composition of functions using the chain rule”. How you group the learning goals for assessment depends on the course and your own preference. For simplicity, you may initially try to limit yourself to 16 umbrella topics, and students can try to master each topic. For examples of lists like this, see the resources page.

I’ll talk more about choosing the list of skills, but now it’s time to write the syllabus!

**Syllabus explanation of MBT**

Your syllabus will look mostly the same as normal, except it should have some explanation of the mastery-testing method. Whether you put the list of topics on the syllabus is up to you — I did not put the list of topics the first time I used MBT because I was still figuring things out when the semester started. Now, however, I do try to put the list of topics on the syllabus because it helps to have the list easily accessible to students.

In the syllabus section where you would normally explain how many midterms there will be and whether the final will be cumulative, you will now have to explain how MBT works. I borrowed most of this directly from Austin’s syllabus, but this is just a starting point. In my class, I call the mid-terms “Quests”. The final exam is really just another Quest. Here is the MBT statement from my syllabus for Calc II.

*Grading Policy*

*Grades in this course will be determined by an assessment system that relies on mastery of 16 sorts of problems. For each type of problem you will have multiple attempts to demonstrate mastery. The examinations (Quests and the final exam) will all be cumulative. The first Quest will have 5 problems, the second will have 5+4=9 problems (with 5 being variants of the ones occurring on the first quest), the third will have 9+4=13, and the fourth quest and the final exam will both have 16 problems. There may also be re-attempts periodically to allow for further attempts at each type of problem.*

*I record how well you do on each problem (an M for master level, an I for intermediate level, and an A for apprentice level) on each quest. After the final, I use the highest level of performance you achieved on each sort of problem and use this to determine your course grade.*

*If at some point during the semester you have displayed mastery of each of the 16 sorts of problems, then you are guaranteed at least a B+ (homework and Maple proficiency will determine higher grades). The grade of C can be earned by demonstrating mastery of at least 12 of the types of questions. If you master at least 8 of the types of problems you will earn a D-. A more detailed grading table is given below.*

*(See Jeb’s post on Nuts and Bolts: Part 2 for an example of a grading table.)*

*This method of arriving at a course grade is unusual. There are several advantages. Each person will have several chances to display mastery of almost all of the problems. Once you have displayed mastery of a problem, there is no need to do problems like it on later exams. It is possible that if you do well on Quests you may only have one or two types of problems to do on the final exam. It is also possible that a few students will not even have to take the final exam.*

*This method stresses working out the problems in a completely correct way, since accumulating a bunch of Intermediate-level performances does not count for much. It pays to do one problem carefully and completely correct as opposed to getting four problems partially correct. Finally, this method allows you to easily see which parts of the course you are understanding and which need more attention.*

*If at any point during the semester you are uncertain about how you are doing in the class, I would be very happy to clarify.*

[Aside: Another MBT enthusiast uses something like Padawan/Jedi/Knight instead of Apprentice/Intermediate/Master. Come up with your own names, you can. Yes, hmmm.]
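The thresholds quoted in the syllabus above amount to a lookup from mastery count to a minimum guaranteed grade. The sketch below encodes only the three anchors the syllabus states (16 → B+, 12 → C, 8 → D-); the cutoffs between them come from a fuller grading table not shown here, and treating fewer than 8 masteries as an F is my assumption.

```python
def minimum_grade(mastered):
    """Minimum guaranteed grade from the stated syllabus anchors.
    Homework and Maple proficiency determine grades above B+, and the
    full grading table fills in the steps between these anchors."""
    if mastered >= 16:
        return "B+"
    if mastered >= 12:
        return "C"
    if mastered >= 8:
        return "D-"
    return "F"  # assumption: fewer than 8 masteries fails

print(minimum_grade(16), minimum_grade(12), minimum_grade(8))
```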

Writing a statement about MBT in the syllabus is a first step towards getting student buy-in. It is equally important to have a prepared summary you can give on the first day or during the first week about what MBT assessment is and why you use it.

**Selling the idea to students/first day comments**

On the first day of class, I spend about 10 minutes explaining how my version of no-points assessment works, and why I choose to determine course grades in this way. My biggest selling point to students is that I want to give them more than one try to demonstrate that they understand the course material. It is helpful to emphasize that while the standard for mastery is high, every single person in the class has the chance to succeed with MBT.

Most students have not encountered a mastery-grading system before, but they are usually excited by the idea that they may not have to show up for the final exam if they master all of the topics during the mid-term assessments. (That is, if you choose to handle your course in this way — there are other models that require a cumulative final exam regardless of mastery levels during the semester.)

When explaining the system to my students, I use the syllabus explanation as a guide, and I add in my own personal reasons for approaching assessment in this way — here are a few.

- I find points arbitrary — what does it really mean to get a 7/10 on a quiz? It is also difficult to keep a “7/10” consistent throughout the semester, which frustrates students.
- I think that real learning requires revisiting your previous work and addressing misconceptions. To be successful in an MBT course, you have to address (at least some of) your past mistakes.
- Completing a math problem in its entirety is an important skill — it requires persistence and focus. Removing the notion of partial credit emphasizes this skill.
- Giving students the list of topics and skills at the start of the semester allows them to see a path to success in the course. It also lays out exactly what the course goals are. This can help to remove some of the mystery surrounding the class and it also may help with math anxiety.

When presenting the MBT system in the first week, I try to be relentlessly positive. Usually it goes over well. Some students may wish to leave the class because it is different from what they are used to, and that’s ok too! On the whole, enrollment in my classes has been steady since I’ve started to use mastery grading.

**Incorporating other graded components in the course **

It is easy to incorporate other components of the course that are not appropriate for an in-class assessment — for example, practice homework problems, projects, oral presentations, Maple projects, and so on. You can choose to grade these other components in the mastery style (M/I/A or K/J/P) or you can revert to a standard points system for these components. I’ll demonstrate with an example of how I incorporated homework into Calc I and II.

I used WebAssign (an online homework system) for daily homework in Calculus I and II. I gave students repeated attempts on the homework, just as I did on exams. In the case of homework, 6 attempts was the cutoff. In the final course grade, homework counted the same as two of the “mastery topics” — if a student got at least 70% of the homework problems correct (eventually), I treated it as one mastery. If they got at least 90% of the homework problems correct, I treated it as two masteries. Therefore, my 16 “topics” grew to 18 in the final course grade, and two of them aren’t really topics at all — they depend on homework. There are other ways to incorporate homework or presentations — see the resources page for more examples.
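The homework-to-mastery conversion described here is easy to make precise. A sketch, assuming (as the text implies but does not state) that below 70% earns no mastery credit:

```python
def homework_masteries(pct_correct):
    """Convert the eventual homework percentage into mastery credit:
    at least 90% counts as two masteries, at least 70% as one.
    (Assumption: below 70% earns none; the post doesn't say.)"""
    if pct_correct >= 90:
        return 2
    if pct_correct >= 70:
        return 1
    return 0

def total_masteries(topics_mastered, homework_pct):
    # 16 content topics plus up to 2 homework "topics" = 18 in the final grade
    return topics_mastered + homework_masteries(homework_pct)

print(total_masteries(15, 92))  # 15 topics + 2 homework masteries -> 17 of 18
```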

**Deciding on the list of skills to master**

I recommend drafting a final exam that you would give in a standard version of the course, and determining the skills list based on that exam. It’s a great way to see what you hope students will accomplish. Some problems may require multiple different skills, so those may become two different topics. Others might not merit an entire topic on their own, but they might be easily grouped with another type of problem. For example, I had a topic in Calc II called “Applications of Integration to Arc Length, Surface Area, or Probability”. That topic usually had a question with two unrelated parts — but both parts used integration to solve a problem.

It’s a process of trial and error, for sure! I have edited my list of topics from past classes, after discovering that some topics shouldn’t be given equal weight in the final course grade.

Jeb’s post continues with an in-depth view of choosing exam problems, dealing with quizzes, and grading.

]]>Welcome to the inaugural post on the Mastery-Based Testing Blog. We are all excited to share an assessment technique that many of us find to be superior (at least one of us self-evidently so) to traditional exams. (I should credit George McNulty at the University of South Carolina for introducing me to the idea by way of subjecting me to the exams as a graduate student.) What follows is an essay form of a short talk I’ve given at a few national conferences. My aim here is to give a brief overview of the method (others will speak in more detail about specific implementations) and share several aspects in which you and your students may find it preferable to traditional points-based exams.

A brief word about the title: It was intentionally chosen to be half facetious and half sincere. On the one hand, I lack the evidence to make a statistically significant case that this method promotes student gains, so my defense is to shame the reader for demanding such evidence; the claim stands on its own. (This is where you struggle to gratify me with a brief chuckle.) More seriously, I think the aspects in which the method shines are varied and even somewhat nebulous, as I hope to demonstrate. To choose a single metric like improved grades or increased retention is, I claim, to miss the point somewhat. Large-scale studies on best practices are certainly worthwhile, but the end result is a claim about effectiveness that has been averaged across many different institutional missions, instructional strategies, student bodies, etc. As you read my thoughts on mastery-based exams, consider the impact you think its implementation would have at *your* institution, in *your* courses, and with *your* students. I hope that thinking about your particular details will help make it “self-evident” to you, at least, that the idea is worth exploring further.

**What Is It?**

As I mentioned in the introduction, my aim is to give only the barest sense of the method. My colleagues will provide more details on specific implementations. Among all the implementations, however, there appear to be three essential characteristics.

- Clear content objectives
- Credit only for eventual mastery
- Multiple attempts with complete forgiveness

*Clear Content Objectives*

In my study guides, I partition course content into roughly sixteen big ideas, each consisting of a few related subtopics. For example, under the heading of “Limits” in multivariable calculus, my students would find:

- Prove a limit does not exist by exhibiting two paths of approach with different limiting behavior.
- Prove a limit exists by appealing to continuity.
- Prove a limit exists using the Squeeze Theorem.

I feel that a student needs to demonstrate proficiency in all these situations before claiming to be a “master” of the concept of a multivariable limit. I accomplish this by including a single question on limits consisting of three parts, one for each item. There is no hint as to which part requires which technique, since selecting an appropriate tool for a new problem is itself a skill I want to develop.

*Credit Only For Eventual Mastery*

There is no partial credit in a mastery-based exam; a question is either mastered or it is not. What does it mean to master a question? One of the refreshing aspects of the method is that “mastery” is at the discretion of the instructor. My standard tends to center around a single question:

Will the student benefit from studying the objective again?

In the multivariable limit example, consider a squeeze theorem proof that goes off the rails early due to some absent-minded factoring error, but otherwise correctly applies the theorem. In my opinion, this problem is mastered; the student has demonstrated proficiency with the intended idea. Contrast this with a proof where the student divines the correct limit, but does not justify the bounds used in the proof. This student would most likely benefit from taking a second look at how inequalities allow the squeeze theorem to accomplish its goal.

*Multiple Attempts With Complete Forgiveness*

To allow students time to incorporate instructor feedback and progress toward mastery, it is important to allow multiple attempts on each big idea. Moreover, to emphasize our desire for eventual (rather than immediate) mastery, I believe it is crucial that previous failed attempts carry no penalty. There are many creative ways of accomplishing these goals, but my exam structure typically resembles the following:

- Test 1: Objectives 1 – 4
- Test 2: Objectives 1 – 8
- Test 3: Objectives 1 – 12
- Test 4: Objectives 1 – 16
- Final Exam: Objectives 1 – 16

(For Tests 3 and 4, I often split the old questions from the new and hold the test over two days to give students more time to work.) So, for example, a question addressing Objective 1 appears on every exam. The versions appearing on each exam are different enough from one another that rote memorization is no help, but they are similar enough that they are clearly addressing the same objective. Under this exam structure, a student could fail to master Objective 1 four times without penalty, only to display true mastery on the final exam and earn full credit. At the other extreme, a student may display mastery on the first attempt and simply ignore the alternate versions appearing on subsequent exams. *A student need only display mastery one time to earn credit for the objective. *(I emphasize this point to stress that no student is attempting to solve sixteen calculus problems in an hour. Each student approaches an exam with a personal list of 3 – 5 objectives they hope to master on a given attempt.)
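The cumulative exam structure above, and the "personal list" of objectives a student still hopes to master, can be sketched in a few lines (the function names are mine):

```python
def objectives_on_exam(exam_number, per_exam=4, total=16):
    """Objectives covered by cumulative test n: 1-4, 1-8, 1-12, 1-16.
    The final exam covers all `total` objectives."""
    return list(range(1, min(per_exam * exam_number, total) + 1))

def remaining(mastered, exam_number):
    """The personal list of objectives a student still needs to attempt."""
    return [k for k in objectives_on_exam(exam_number) if k not in mastered]

print(objectives_on_exam(2))             # [1, 2, 3, 4, 5, 6, 7, 8]
print(remaining({1, 2, 3, 5, 6, 8}, 3))  # objectives 4, 7, and 9-12 still open
```

This makes the author's point concrete: a student who has mastered six objectives faces only a handful of live questions on Test 3, not all twelve.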

**Armchair Pedagogy**

It is an unfortunate reality that many of our students will not or cannot devote as much time as we might like to mathematical study. How does the structure of our assessment impact the way in which these precious hours are used? In the following sections, I hope to demonstrate some important ways in which I feel a mastery-based exam surpasses a traditional points-based exam.

*Depth of Knowledge*

Points: Understand everything superficially.

Mastery: Understand some concepts in great detail.

In most points-based systems, a blank exam question is a heavy blow to a student’s grade. On the other hand, a student who provides a couple relevant formulas and something resembling the beginning of a solution may receive half credit or more. In the presence of constrained study time, a good strategy is to learn some basics about every test item. Such a student may earn half credit on most items together with a few lucky shots on easier items, which amounts to a passing grade overall. Take a moment to consider whether this experience has adequately prepared the student to apply mathematical thinking to nontrivial problems in the future.

The “broad and superficial” strategy employed above earns no credit under a mastery-based system. Instead, a student who wishes to earn a passing exam grade must *fully *understand an appreciable subset of the main ideas of the course, and a student wishing to earn an A grade must *fully *understand most or all of the main ideas of the course. Even if students spend no time studying a particular item, I contend that the experience of pursuing deep understanding on the other items leaves them in a stronger position to engage deeply with the troublesome topic when it is needed in the future. Moreover, depth of understanding is critical to one’s ability to apply existing mathematical knowledge in novel domains.

*Meaningful Office Hours*

Points: “How can I easily pick up some points?”

Mastery: “What should I study to fully understand this concept?”

Making grades a function of understanding rather than accumulated trivia lends itself to more meaningful discussions during office hours. A student seeking only to gain more points asks superficial questions with easily-memorized answers (items that are readily available in the textbook). A student seeking mastery must be willing to reflect upon their partial understanding, ask pertinent questions that address their current misunderstanding, and synthesize the conversation into a holistic approach to the concept.

*Perseverance*

Points: Try a problem once (maybe twice) and hope for the best.

Mastery: Keep trying until you succeed (and I know you can).

One might make the case (and I frequently do) that even students who are far removed from science will benefit from mathematical study because it is exceptionally effective in training students to persevere in solving complex problems. Points-based assessment undermines this valuable experience; a student can often obtain a passing grade without working even a single problem to completion. Indeed, even those who might take a second look at a challenging exam problem may not have incentive to do so if it does not appear on the final. To contrast, a mastery-based exam requires that a student display satisfactory depth of understanding to receive any credit, and this may take several attempts. It falls on the instructor to engender a classroom atmosphere in which these multiple attempts are seen not as shortcomings, but rather as the very essence of deep learning.

*Timely Review*

Points: Review key concepts for the final (maybe).

Mastery: Review key concepts now.

A points-based system can provide a perverse incentive to ignore key early concepts. As an example, I ask students to provide the vector equations of lines and planes to help develop their intuition about the geometry of vector operations. Under a typical points-based system, a student does not directly benefit from revisiting these foundational concepts. Rather, they are encouraged to press on toward a superficial understanding of applications of that concept, which is both counterproductive and meaningless.

A mastery-based system gives students an immediate reason to revisit important concepts early in the semester, since they will have an opportunity to master them on the very next exam. Continuing with the same example, I did, in fact, recently work with a student who demonstrated poor understanding of vector geometry on the first test only to master the concept on the second. After receiving his graded test, he remarked that studying for that introductory question helped him better understand the new content. It seems understanding *vectors* is essential to understanding *vector* calculus.

*Growth-Mindedness*

Points: Failure is undesirable and incurs penalties.

Mastery: Failure is an opportunity to improve understanding.

The effect of testing on growth-mindedness is, in my estimation, one of the most important facets of this discussion. A points-based system sets arbitrary deadlines by which time perfection must be attained or else penalties are incurred. Each helpful remark from the instructor is coupled with the sting of a progressively lower grade; the more helpful the remark, the greater the deduction. A mastery-based grader can include plenty of penalty-free insight to help the student improve their understanding. Such feedback is actively desired by the student and is sure to be studied since subsequent exams will provide opportunities to earn the missing credit. The message of the mastery-based exam is well-aligned with the development of a growth mindset: “You don’t understand this concept *yet*, so here’s some advice on how to improve. Come back next time and show me what you’ve learned.”

*Reduced Test Anxiety*

Points: Every test has the potential to “ruin” GPA.

Mastery: No single test can ruin your grade.

Among all the items on this list, this one seems to vary most wildly across departments. My students frequently tell me that being able to try questions multiple times with no penalty considerably reduces their test anxiety. Some of my colleagues, on the other hand, report student frustration when they receive feedback on several near misses (which receive no credit) on their exam. There is no panacea for student discontent, but my advice is to hear their complaints while frequently (and gently) insisting that you think this obscure testing scheme is better for them in the long run. I also suggest you keep an eye on your gradebook. If a student is making unsatisfactory progress, consider a short office meeting where you and the student map out a plan of study for the rest of the semester. While I still contend that the general student reception of mastery-based testing is positive, a student with only one problem mastered at midterm is likely to be feeling quite hopeless. A short pep talk and a realistic plan to climb out of the pit will go a long way.

*Formative Assessment*

Points: How many points is this error worth?

Mastery: Will the student benefit from studying the concept again?

Points-based grading is inherently punitive; one must examine a proposed solution looking for opportunities to deduct credit. Moreover, the resulting feedback may not be terribly meaningful. Imagine two students computing the volume of a solid of revolution. One arrives at the correct integral but makes several errors in its evaluation. The other begins with the wrong integral but evaluates it perfectly. A points-based grader must decide how many points to deduct for each of these very different sorts of errors. On a mastery-based exam, one rather asks whether the student will benefit from additional study. Here the answer is clear and meaningful. Computational errors are worth an admonishment to be more careful in the future, but they do not merit additional time spent studying solids of revolution. The fundamental conceptual error, however, may indeed suggest to the grader that the core ideas have not been understood, and the student can be directed to revisit the topic.

*Faster Grading*

Points: Good answers are carefully checked. Point-grubbing is incentivized.

Mastery: Mastery is spotted instantly. All attempts are genuine.

That the ever-growing mastery-based test is faster to grade than the static points-based test seems implausible, but experience bears it out. In a points-based system, a proposed solution has to be carefully scrutinized to determine whether it is worth 6/10 or 7/10 points. Even worse, one has to wade through the mire of 2/10 solutions as students desperately fish for points. Most questions on a mastery-based exam, by contrast, are either left blank or are graced with complete, correct solutions. Both sorts of questions can be graded instantaneously.

**The Plural of Anecdote**

Several of my colleagues (many of whom are helping to develop this blog) issued a student experience survey at the end of their mastery-based courses. The following highlights are drawn from the responses of 140 students (both majors and non-majors) across six institutions. We hope to pursue a larger, formal study, but these blips of data give us hope for the time being.

- The exams deepened my understanding of the ideas in this course. (80% agreement)
- The results of my exams accurately reflect my knowledge. (77% agreement)
- I feel prepared to approach a wide range of problems in this course. (75% agreement)
- I was anxious before the exams in this course. (36% disagreement)

**Give It a Try**

Those of us developing this blog feel that the mastery-based examination is a superior form of assessment with much to offer teachers and learners of mathematics. I encourage you to read the experiences others are sharing. We are also compiling a collection of resources to help you implement the technique as painlessly as possible. Finally, if you agree with, disagree with, or wish to supplement anything I have said in this article, we would all love to hear your thoughts in the comments. This community looks forward to your continued interest.
