Make Math Manageable with a Mastery-Based Method

Here is a little article about my experiences with mastery-based testing and how I have implemented this assessment method at @LewisUniversity.
My profile has sample assessments, reviews, lecture notes, and homework sets too!


What to do when you fall behind the expected pace in your MBT class

Our blog has been rather silent of late, in part because we have been working on a paper which we submitted this past summer. This paper outlines what mastery-based testing is and shares a bit of the data we collected from our first few years of implementing MBT into our classes.

I have been meaning to write this blog post since Christmas break, but it was always kept on the backburner. Today, I finally decided to go ahead and “eat the frog” and talk about something I have been struggling with recently in some of my MBT courses: what to do when you fall behind in a course. Here are some of the things I have done when I have gotten off-pace in a class:

  • Adjust the number of concepts
  • Adjust the number of exams or testing opportunities
  • Adjust the mastery concepts of the course
  • Streamline content in the course if you can (trim the fat)

Now, I think all of these are good ways to deal with falling behind as a teacher, but I want to warn you that making some of these adjustments may have unexpected results as well. Hopefully, my struggles will help you decide on the best course of action for your own situation if you fall behind.

I consider myself a relatively “experienced” MBT-er. I have used mastery-based testing for four years in classes ranging from Calculus to Real Analysis. When I use mastery-based testing, I usually have 4 exams and a final with a total of 16-18 concepts. This structure has worked well for my 4-credit Calculus courses. It also worked well for my 3-credit Real Analysis course during my first two years using MBT. I was pleased with how these MBT courses went, and my students were generally happy. But like all teachers, I was looking for ways to improve my courses. And the tweaks I made in my Real Analysis class over the past two years have yielded some unexpected consequences that I had to deal with in the middle of the semester.

During the summer of 2016, I went to an IBL workshop to help me incorporate more inquiry activities into my classes, specifically Real Analysis I. I have always tried to make my courses active (I use group work, ICE sheets (In-Class Exercises), clicker problems, and think-pair-share in my courses), but the IBL workshop challenged me to stretch my students further, so I did a big overhaul of my Real Analysis class and started adding more inquiry problems to my ICE sheets.

Oh my gosh, I am so behind!
Unfortunately, adding more IBL to my Real Analysis course changed the pacing. Instead of spending a day or a day and a half on a topic, we were spending over a week on some topics. Thus, I decided to delay our first MBT exam because we hadn’t gotten to all of the concepts I had planned to have on Exam 1. As I fell further and further off my planned schedule, it became clear we weren’t going to get to the 18 concepts I had mapped out for my class. Midway through the semester, I had to decide what to do about this pacing problem, so I made two quick adjustments. First, I changed the number of exams from 4 to 3. Then I cut down the number of concepts from 18 to “however many we would get to.” Unfortunately, 3 exams didn’t give my students enough opportunities to retest, so I later had to make two more adjustments to my course. I added a testing week* between Exams 2 and 3 and changed the final exam to be a testing week as well.

How my pacing issues begot other pacing issues
These changes helped me give my students more chances to retest, but they also caused a new problem with the pacing of the course. Because of the slower pace, instead of having exams placed more or less regularly throughout the semester, we ended up having Exam 2, the testing week, Exam 3, and the final testing week all within 4 weeks. It was very taxing on the students to be testing all the time. They were troopers and didn’t complain, but I could feel the extra stress. I vowed to do better next time.

This past fall, I tried to adjust and streamline some of the added inquiry activities, but I still had to delay exams and ran into similar problems. I am not sure if other MBTers have had this issue, but now that it has happened twice, I am going to try some of the following to help with this course in the fall.

  • I could change the concepts of the course. In general, the first part of Real Analysis isn’t broken down into many concepts; much of the start of the class is spent getting students used to writing proofs and constructing well-written arguments. I could try to come up with some new “proof-writing” concepts, which would allow me to keep Exam 1 earlier in the semester, but with new concepts included.
  • I could decrease the number of inquiry problems in some of the sections, especially those at the beginning. I could save some of these proofs for Real Analysis II or move them to the homework to save time. This would allow me to get through more of the concepts for Exam 1.

Perhaps I will try a combination of these two options, and of course, if any of you wonderful readers have other suggestions, please share them in the comments! =) I plan to follow up on this topic after the fall to let you all know how it went.

Conclusion
One of the things I love most about teaching is that it is this great “unsolvable problem” that I always get to keep working on. This past year has thrown me some interesting situations that I hadn’t planned for which have required some adjustments to my MBT implementation. To close out this long-winded post, I will share some general advice on what I have learned from these mishaps:

  • Be willing to be flexible: I think this is an important quality for all educators. I remember David Kung, Director of Project NExT, once said that even though you may be teaching the same subject, each year you get to teach it to a new group of students, which changes the dynamic and keeps it interesting. We need to be willing to make logical and fair adjustments to our plan when things don’t go as expected.
  • Don’t be afraid to be transparent with students: When we fell well behind our Real Analysis schedule, I talked to my students about why I was adding the inquiry activities and explained that, because of this, I didn’t know exactly where we would end up at the end of the semester. When I was debating different ideas to help with the MBT pacing, I asked for their input. At the end of the semester, after their oral exams (not MBT oral exams, just regular oral exams), I gave them another chance to give feedback about the class. I shared some of what I thought didn’t go as planned or as well as I had hoped (the pacing) and asked them for suggestions. In the end, even though I felt like I was struggling more with teaching this course than in past semesters, I received the highest course evaluations I had ever gotten in Real Analysis. I think this was in part because I was transparent with them throughout the semester.
  • Buy-in is important: I made sure to have my students understand why I was using MBT and thus, when the MBT schedule didn’t go as planned, they still understood why I was using MBT and why I was adding more IBL.
  • Forgive yourself: We aren’t perfect, and it is okay when things don’t go how you want. Forgive yourself, learn from the situation, and let it go. You can try to do better next time. For goodness’ sake, you are a mathematician; we love problem-solving!

Have you had to deal with the challenge of falling off pace? What have you done? What have you learned?

—————————————————————————————————————————————–

About the Blogger: Amanda Harsy teaches at Lewis University, a private, Catholic, Lasallian university located in Romeoville, Illinois, 35 miles southwest of Chicago. Lewis has 6,500 undergraduate and graduate students with a 34 percent minority population. Lewis is primarily a teaching university, and most professors teach 4 classes a semester. During her four years at Lewis, she has used mastery-based testing in Calculus II, Calculus III, Applied Linear Algebra, and Real Analysis with class sizes of 10-50 students. During this time, she has also completed a two-year study comparing MBT to traditional testing in her Calculus II courses. She is currently working on a follow-up to this study with Dr. Alyssa Hoofnagle of Wittenberg University.

* In my Calculus courses, I usually use 3-4 testing weeks between exams to give students extra opportunities to retest. Students can test any concept, but only once during a testing week. So, for example, a student could retest concepts 1 and 4 on Monday and concept 2 on Tuesday, but they couldn’t retest concept 1 again on Tuesday. Before this semester, I had never used testing weeks in Real Analysis.


Streamlining the logistics of MBT

Happy 2017! I escaped the snow and ice of the Atlanta Joint Mathematics Meetings for the wind and cold (and snow) of northwest Iowa. While at JMM, I had a nice discussion with a few Gold ’14 dots (Project NExT fellows) about their implementations of MBT, and as a result, I decided to write a post about how I manage the logistical side of things in my MBT (and often specifications-graded) courses and share some of the resources I’ve developed.

My goal in managing the logistics has always been to make things as simple as possible for me. That might sound selfish, but it’s really more about self-preservation. Your mileage may vary when implementing any of the ideas/tools below, and that’s okay! You need to find the system that works for you and the students in your classes at your institution. Here are some general principles and ideas that have worked for me.


Mastery-Based Testing with Core Concepts


Hi everyone, Amanda Harsy and Jessica O’Shaughnessy here to add a few thoughts on a variation of mastery-based testing. Both of us have been using mastery-based testing in our courses over the last two years. Jessica uses MBT in Calculus I, Calculus II, and Introductory Statistics, while Amanda uses it in her Calculus II, III, and Real Analysis courses.

We usually have 16-18 concepts on our mastery exams, and students can add to their exam grade by mastering these concepts. Now we have a choice: should we treat all the concepts as equal? That is, can students master any of the concepts to build their grade? In some courses, it may make sense to let the students choose.

But what if you feel that not all concepts are created equally? That is, are there some concepts you really think students should have grasped after taking a class?

Both of us use a slight modification in our mastery-based grading to address this belief. For example, since Calculus II is a sequential course, there are certain concepts we think all Calculus II students should master in order to be successful in Calculus III. We want students to be able to differentiate transcendental functions, calculate area, and use integration by parts. To enforce these concepts, we have introduced “Core Concepts.” In Amanda’s Calculus II course, for example, these concepts include differentiation and integration of transcendental functions, L’Hopital’s Rule, Advanced Integration Techniques, Calculating Area, Calculating Volumes, and Interval of Convergence for Series. Students must master these 7 concepts in order to earn at least a C for their exam grade. Similarly, Jessica breaks her Calculus I classes into 16 topics and 7 core/required concepts. The students must master these 7 before any other questions count. If they do only these required concepts, they will receive a 70% as their test grade.
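
To make the arithmetic concrete, here is a minimal Python sketch of how a grade rule like this could be computed. The 16-topic/7-core split and the 70% floor come from the description above; the linear scale above 70% and the treatment of an incomplete core are assumptions made purely for illustration.

    def exam_grade(mastered, core, topics=16, core_value=70.0):
        """Toy model of the core-concepts rule described above.

        `mastered` is the set of topics a student has mastered; `core` is
        the set of required topics.  Only the 16/7 split and the 70% floor
        come from the post; the rest of the scale is assumed.
        """
        if core <= mastered:                      # all required topics mastered
            extras = len(mastered - core)         # non-core topics count now
            return core_value + extras * (100.0 - core_value) / (topics - len(core))
        # Until the core is complete, only progress on required topics counts.
        return core_value * len(mastered & core) / len(core)

    core = {f"C{i}" for i in range(1, 8)}              # 7 required concepts
    print(exam_grade(core | {"T1", "T2"}, core))       # 76.7: full core plus 2 extras
    print(exam_grade({"C1", "C2", "T1", "T2"}, core))  # 20.0: extras don't count yet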

We both have a love/hate relationship with core concepts, which we tell students are “required” concepts. We’ll start with the love:

  • First of all, it forces students to have “mastered” important concepts they will need to understand to be successful in the subsequent course. As much as we love related rates, our students can probably manage Calculus II without fully understanding that topic. However, the chain rule is absolutely critical. Previously, they could get a C by half understanding the chain rule and half understanding related rates. This way, they know they cannot move on to Calculus II without a full grasp of the chain rule (and other required topics). They are better prepared to build on these topics.
  • It also helps students focus on particular topics for each exam. For our weaker students, we can direct them to these concepts to get started. We want them to focus on the calculus concepts that they really need.

The cons (we will use cons instead of hate since we think hate is really too strong):

  • This method for calculating the mastery exam grade is a little more complicated than treating all concepts equally. It also demands more of students; for example, it requires them to keep working on some of the more complicated integration techniques until they master them.
  • What about when a student masters 14/16 concepts, but one of the two they didn’t master is a required concept? Like limits? Should they be allowed to pass the course when they understand so much of the material? Granted, we think this is an extreme case, but we all know these extreme cases happen so it is important to think about how to handle these situations.  Here is one way that Jessica has dealt with these cases.
    • “Occasionally I will have students with several required topics going into the final exam. This stresses the students because they feel like they will fail the course if they do not master those required topics. Although I do not tell them, I grade these in a different way on the final. Typically, a student masters or does not master a question. However, I mark those that are “close.” If they are close but have not fully mastered a question, then I will count it toward their passing grade, but I will not add it as an extra “mastered” question. For example, 7 mastered questions is a 70% for my test. If they mastered 7 questions, 6 required concepts and 1 non-required, this means they would have 1 required concept unmastered. If they are “close,” I will go ahead and give them a 70% for the test grade. I will allow up to two that are “close.” My definition of close may be a few algebraic mistakes or some minor conceptual mistakes. Students who have no idea how to do the chain rule do not obtain a ‘close.’” -Jessica (A sketch of this adjustment follows this list.)
  • What should we do when students don’t try the required concepts? Unfortunately, we both have had students who haven’t attempted the core concepts. Then they complain that they had 7 questions mastered and they don’t understand why they received a D in the course. It is extremely important to emphasize that some concepts are required and to spell this out carefully in the syllabus. Of course, there will always be students who do not pay attention to this, but at least we have done our best.
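
Continuing the toy model from the sketch above, Jessica’s end-of-semester adjustment might look like the following. Only the cap of two “close” topics and the 7-mastered-equals-70% anchor come from her description; everything else is assumed.

    def final_grade_with_close(mastered, close, core, topics=16, core_value=70.0):
        """Apply the final-exam 'close' allowance (scale is assumed).

        Up to two required topics that were merely 'close' count toward
        the core requirement, but are never added to the mastered total.
        """
        forgiven = set(sorted(close & core)[:2])   # the cap of two is from the quote
        if core - mastered <= forgiven:            # core effectively complete
            extras = max(len(mastered) - len(core), 0)
            return core_value + extras * (100.0 - core_value) / (topics - len(core))
        return exam_grade(mastered, core, topics)  # otherwise the base rule applies

    # Jessica's example: 6 required + 1 non-required mastered, 1 required topic 'close'.
    core = {f"C{i}" for i in range(1, 8)}
    mastered = (core - {"C7"}) | {"T1"}
    print(final_grade_with_close(mastered, {"C7"}, core))  # 70.0, matching the quote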

Overall, we both really like core concepts. This way, we can make our derivative questions hard enough that students will really show us they have mastered the concept without the ability to skip that concept because it is “too hard”. It allows students to be better prepared for their sequential courses and encourages them to review their old material. If you worry that students will skip some important ideas that won’t necessarily show up in other concepts because of the mastery-based system, this may be a good variation for you!

Cranking MBT up to 11 with Specifications Grading (Part III)

Hello! I’m not Mike!


My name is Derek Thompson, and I’m an Assistant Professor at Taylor University in Upland, IN. Like the others on this blog, I’ve seen the transformative power of mastery-based testing. And like Mike, I’ve seen how powerful it can be when combined with specifications grading. This post is about a huge mistake I nearly made; here’s what not to do with specifications grading and mastery-based testing.


One of the largest perks of mastery-based testing is getting the buy-in from students on day one. The idea that they can retake exams, and that they can even skip the final if they do well, is alluring and comforting. Specifications grading takes this a step further. My syllabi were carefully designed last semester to adjust one aspect of MBT and to amplify another.

First, weekly quizzes (“quests”) were the first crack at a topic, the two midterms were the second, and the final was the third. This meant that unlike the traditional MBT model, each topic was given three attempts rather than favoring earlier topics with more attempts. Second, if you look at the specifications outline below from last Spring’s Advanced Calculus course, you can see that grades can only go up. Mixing MBT with traditional weighted percentages for other aspects of the course kept me from making this claim previously.

[Image: specifications grading outline from last Spring’s Advanced Calculus course]

There were a number of growing pains with this method in my first run in the Spring, but overall I think it was a success. The main problem was that the topics were too easy, and in that course no one even needed to take the final. The university, of course, wants us to have a final on record in case grades are contested, and likewise midterms are somewhat needed to indicate poor grades at midterm to coaches, advisors, etc.

With specifications grading, it’s easy to simply add more specifications, although you run the risk of making the syllabus too complex. When I presented the ideas at Mathfest 2016, I presented this “solved” list of specs for my Discrete Math for Computer Science course.

[Image: proposed specifications list for Discrete Math for Computer Science]

Do you see the problem? While it makes sense that the final can be its own grade (and, though you don’t see this on the chart, it would still count as a retake of each core objective), one of my main selling points for specs grading is now a lie. Students could do all the work, tank the final, and receive an F in the course under this model. This generated some awkward looks in the audience, and I thank God that I gave a poorly thought-out presentation before I gave my students a poorly thought-out syllabus.

I said something in that talk that I think is succinct and profound, and that illustrates why specs grading is the natural evolution of MBT. Last Fall, before I did specs grading, my students were already doing mastery proofs and redoing WebAssign problems in addition to MBT. The whole course was mastery-based, except for my syllabus. That’s important, and the syllabus I presented at Mathfest failed to preserve that aspect of last Spring’s course.

With that plan properly in the garbage, I’m still presented with the problem of needing students to take the midterm and the final. I’ve got a bad habit of moving by miles when I should only go inches if I’ve already got a good framework. Keeping myself in check, my solution for this semester is to keep the weekly quests but have some new topics on the two midterms, just like there would be in traditional MBT. These topics can be attempted again on the final. Here’s the corrected Discrete syllabus:

[Image: corrected specifications list for Discrete Math for Computer Science]

They can still opt out of the final if they’ve done well enough, but since the final will be only the second attempt for the midterm topics, opting out is less likely. This guarantees a large exam grade that I can use for midterm reports. While it doesn’t allow for equal attempts at each topic, it allows me to categorize them: the heavy material will be on the weekly quests when it’s fresh and retested twice; the midterm topics will be lighter reminders of old material that only get retested on the final. Of course, students will have a few “coins” (I call them “power-ups”) to do specific retakes in my office if they need to. It’s also a nice way to hand out “extra credit” by allowing retakes for doing extra work; they’ll still need to redo the original material correctly.

You’ll also notice that I have twice as many core objectives as I had Quests for Advanced Calculus – breaking the objectives into smaller chunks will help relieve pressure and also make it more likely that they’ll have a few odds and ends to show up for on the final. It’s also consistent with the ACM standards for discrete math, which include about 40 learning objectives. Having some new ones as midterm topics helps space out this large number as well.

And you might also notice that there’s still one way grades can go down – unexcused absences. This is consistent with official university policy, but now it’s neatly wrapped into the specs so that students are reminded of the issue, instead of it being in a random corner of the syllabus and forgotten.

To summarize, I think one of the most appealing aspects of MBT and specs grading (done right) is that grades only increase. Students begin with an F, but they can never lose credit for work accomplished. That’s a huge motivator, and it encourages growth mindset, grit, and all of those other learning dispositions we’ve been talking about. Feel free to comment below with questions, or send an email to drthompson@taylor.edu.

Cranking MBT up to 11 with Specifications Grading (Part I)

I first implemented MBT in a Calculus I course in the spring of 2015 and realized, as Austin writes, that MBT is “self-evidently better” than traditional, points-and-partial-credit-based testing. I’ve used some form of MBT in every course since then.

Around the same time, a friend retweeted a post by Robert Talbert of Grand Valley State University that made reference to “specifications grading,” a system proposed by Linda Nilson (I would strongly recommend you read her book if you are thinking of implementing specs grading). With my mind already opened to the possibility of alternative grading systems, I took the plunge and read several of his blog posts, which were then at the Chronicle of Higher Education and are now hosted on his own site. To be clear, most of what I have learned about implementing specs grading in a math course, I have learned from reading him. But in this post and the next, I’ll do my best to explain how the system works, and how naturally MBT fits into such a course.


Nuts and bolts, part 2

This post was written by Jeb Collins

Welcome back to the nuts and bolts of mastery-based testing. This post is part 2 on the logistics of actually implementing this assessment method in the classroom. If you haven’t read part 1, Katie does a great job explaining how to prepare before the semester and what happens during the first week. In this post I will talk about what happens once the semester starts: how to create the multiple tests and quizzes given throughout the semester, how to grade those tests and quizzes, and how the final grade is calculated. I’ll be writing about my experiences and choices in how this is implemented, with the huge caveat that there are many ways of implementing this method, and mine is simply one of them. I’m sure you’ll see many others in future posts.

Choosing Exam Problems

Once the mastery topics are chosen, the individual problems must be chosen to appropriately test those topics. There are a couple of things to keep in mind when choosing the problems. First, one of the main differences from exams in a points-based approach is that you need multiple versions of each question. Therefore, you want to choose questions that can be easily modified. I have found that such questions tend to come in two forms. The first is something like a differentiation or integration question, where the question can be changed completely by a small change such as x^2 \to \sin(x). It is generally very easy to come up with multiple versions of those questions. The other form is word problems, such as related rates. In order to qualitatively change such a question, a good deal of work is needed to come up with a new problem. For those questions, I like to find two or three that are qualitatively different, and then create different versions from those two or three by simply changing numbers.

I also choose slightly harder problems for my mastery-based tests. I have a couple of reasons for this. First, I don’t give a cumulative final, so once a student has mastered a topic they are never tested on it again. So I want to be sure that students who obtain mastery on a topic have truly mastered the material. Also, each test has only four new problems on it, and therefore is slightly shorter than a typical points-based test.

Finally, something to consider when writing questions is their length. Mastery questions will often have multiple parts, since most topics cannot be tested by one problem. However, this can lead to trouble if the question is too long, since failure on one part can lead to a loss of mastery on the whole topic. I have one example from Calculus 2 where I wrote an integration question that included both trig substitution and partial fractions. I noticed that often students would get one part right and the other wrong, and therefore wouldn’t get mastery on the topic. The next semester I decided to break the two techniques into separate topics, and it worked much better. I had more total mastery topics, but the students were better able to demonstrate their ability with the different integration techniques.

Writing Exams

Something I didn’t realize when I started using mastery-based testing was that writing the tests can be considerably different from a points-based method. In a points-based method, each test written is the same length and covers distinct topics. In mastery-based testing, each test is longer than the last, covering new material in addition to different versions of problems from the previous tests. In a points-based method, the quizzes are uniform; that is, each student takes the same quiz. In mastery-based testing, I may have to create as many as 18 different quizzes for one day, as each student may wish to attempt a different problem. All of this adds up to needing a different method to create these tests and quizzes efficiently.

What I like to do is create a library of problems for each mastery topic. I create a different file for each topic and store all the versions of my problem for that topic in the file. I usually create about 3-10 problems for each topic, with fewer versions for the later topics since they will be used less often. This is a good thing to do during the summer or winter break, when I have some extra free time. With this library in hand, writing the exams or quizzes becomes a matter of copy and paste, and can actually be done very quickly.
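
If you would rather script the assembly than copy and paste, something like the following Python sketch could work. The one-file-per-topic library mirrors the setup described above, but the directory layout, the \begin{problem} convention, and the function names are all invented for illustration.

    import random
    from pathlib import Path

    # Assumed layout: problems/topic01.tex, problems/topic02.tex, ...,
    # each holding several \begin{problem}...\end{problem} versions.
    LIBRARY = Path("problems")

    def load_versions(topic):
        """Split one topic's LaTeX file into its individual problem versions."""
        text = (LIBRARY / f"topic{topic:02d}.tex").read_text()
        blocks = text.split(r"\begin{problem}")[1:]
        return [r"\begin{problem}" + block for block in blocks]

    def build_exam(topics, seed=None):
        """Pick one random version of each requested topic, in order."""
        rng = random.Random(seed)
        return "\n\n".join(rng.choice(load_versions(t)) for t in topics)

    # A test covering topics 1-13, each drawn fresh from the library:
    print(build_exam(range(1, 14), seed=3))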

Quizzes

In mastery-based testing, quizzes are an opportunity for students to obtain mastery on a single topic between the exams. Since failure on the exams is not only expected but considered beneficial for the students’ learning, I want to give my students as many opportunities as possible to demonstrate that they have learned from their failures on previous tests. As the semester goes on, the exams get longer and more stressful for the students, and quizzes are an opportunity for students to demonstrate mastery on a single question. This obviously increases their grade, but it also leaves fewer questions to attempt on the next test, which reduces stress. For these reasons, I actually find students asking for more quizzes than I have time to give.

When I give quizzes, I let each student choose which mastery question they would like to attempt. This only makes sense, since each student will have mastered different topics on the previous exams. To save paper, I don’t print out quizzes with all the questions on them. Instead, I have the students email me by the day before the quiz to let me know which problem they want to attempt. This is usually the biggest problem, because students forget about the email and are then unable to take the quiz. So I remind them often for about a week before the quiz to send me a quick, one-line email, and I also print out extra quizzes. Usually, by the time the second quiz happens, all the students who want to take the quiz remember to email me. I always give the quiz at the end of the class period, because some students will not take it, either because they have mastered all the topics up to that point or because they are simply not prepared by the time the quiz comes around. I am fine with this, and just let those students leave early.

Grade Distribution

I use the following grade distribution in my mastery-based classes:

  • Tests: 80%
  • Homework: 20%

I choose to have the majority of the student’s grade determined by the mastery score. My main reason for doing this is that I design the exam questions such that a student who shows mastery of them understands the material of the course. Another reason is that the mastery exams are where growth mindset is emphasized in the course, and I want that mindset to be emphasized for the students.

One aspect of mastery-based testing that I have found to vary widely among implementers of this testing method is how the average test score is calculated. The way I calculate the test average is to subtract 5% for every question not mastered. This of course means that you can master no questions and still get a positive exam average, but since this would result in an F under any reasonable grading scale, it really doesn’t matter. However, this does make it somewhat difficult for students to understand how to calculate their grade as the semester progresses. To help with this, I provide the following table on my syllabus:

[Table: number of questions mastered vs. resulting exam average]

This helps the students get an idea at the beginning of the semester of how many questions they need to master to obtain the grade they want. More importantly, it helps them determine their grade during the semester. This table also allows me to easily emphasize the importance of doing homework, as the students can easily see that doing poorly on homework requires them to do very well on exams to compensate.
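
In code, the calculation behind that table is a one-liner. The 5%-per-unmastered-question rule and the 80/20 weighting are taken from this post; the function names and the 16-question default are only for illustration.

    def exam_average(mastered, topics=16):
        """Subtract 5% for every question not mastered."""
        return 100.0 - 5.0 * (topics - mastered)

    def course_grade(mastered, homework, topics=16):
        """Weight the mastery score 80% and homework 20%, per the distribution above."""
        return 0.8 * exam_average(mastered, topics) + 0.2 * homework

    print(exam_average(14))        # 90.0: two questions unmastered
    print(course_grade(14, 85.0))  # 89.0: homework still moves the final grade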

Hopefully these posts have given a good broad overview of how a mastery-based class could be run. As I mentioned above, this is only one way in which the class could be set up. There are really as many implementations of this method as there are instructors using it, but these basic guidelines should give a good starting place for those of you thinking about trying out mastery-based testing.


Nuts and bolts, part 1

This post is aimed at educators who are considering taking the leap into mastery/standards/specifications-based assessment, but aren’t sure where to begin. My biggest question when I first heard about mastery-based testing (MBT) was: no points?! How does that work?

This post is part 1 of the ‘Nuts and bolts’ instructions for how to do points-free assessment with MBT. Part 1 deals with the logistics before the course begins — how should you write the syllabus and explain MBT to your students? Part 2 explains the specifics of writing exams and grading that you will be dealing with once the course starts.

Taking a mastery-based testing (MBT) approach to a course requires some advance planning, but it is the type of planning that helps not only with the assessment but also with the long-term goals of the course. If you are nervous about trying a new assessment method like MBT or specifications grading, you might consider starting with a course you’ve taught before. I did this during the first year of my current job: I taught Calc 1 in the fall with a normal points system (100-point exams, 10-point quizzes, etc.). The following semester I taught a section of the same course, but used an MBT approach to assessment.

Feeling comfortable with the content is important for laying the groundwork of mastery-based assessment because you’ll need a good idea of what it is the students should master in the course before you begin. You can think of it as writing the final exam before you write your syllabus. If you are the type of person who has already written your final for the course you’ll teach next semester — then congratulations, this will be easy for you! If instead you are like me and sometimes write the final exam during finals week, then I’m here to convince you that it’s still (mostly) easy to use MBT.

The first step is to create a list of the essential skills and topics that the course will cover. If you haven’t taught the course before, creating a list of learning outcomes is one of those recommended teaching techniques that (to be honest) I didn’t hear about until I had taught several courses on my own. [See, for example: Writing Student Learning Outcomes] In short, it is a good idea, even if you aren’t using MBT next semester. Your list might consist of both general skills like “students will engage in problem solving” and also subject-specific techniques like “students will be able to take the derivative of a composition of functions using the chain rule”. How you group the learning goals for assessment depends on the course and your own preference. For simplicity, you may initially try to limit yourself to 16 umbrella topics, and students can try to master each topic. For examples of lists like this, see the resources page.

I’ll talk more about choosing the list of skills, but now it’s time to write the syllabus!

Syllabus explanation of MBT

Your syllabus will look mostly the same as normal, except it should have some explanation of the mastery-testing method. Whether you put the list of topics on the syllabus is up to you. I did not include the list the first time I used MBT because I was still figuring things out when the semester started. Now, however, I do try to put the list of topics on the syllabus because it helps to have it easily accessible to students.

In the syllabus section where you would normally explain how many midterms there will be and whether the final will be cumulative, you will now have to explain how MBT works. I borrowed most of this directly from Austin’s syllabus, but this is just a starting point. In my class, I call the mid-terms “Quests”. The final exam is really just another Quest. Here is the MBT statement from my syllabus for Calc II.

Grading Policy

Grades in this course will be determined by an assessment system that relies on mastery of 16 sorts of problems. For each type of problem you will have multiple attempts to demonstrate mastery. The examinations (Quests and the final exam) will all be cumulative. The first Quest will have 5 problems, the second will have 5+4=9 problems (with 5 being variants of the ones occurring on the first quest), the third will have 9+4=13, and the fourth quest and the final exam will both have 16 problems. There may also be re-attempts periodically to allow for further attempts at each type of problem.

I record how well you do on each problem (an M for master level, an I for intermediate level, and an A for apprentice level) on each quest. After the final, I use the highest level of performance you achieved on each sort of problem and use this to determine your course grade.

If at some point during the semester you have displayed mastery of each of the 16 sorts of problems, then you are guaranteed at least a B+ (homework and Maple proficiency will determine higher grades). The grade of C can be earned by demonstrating mastery of at least 12 of the types of questions. If you master at least 8 of the types of problems you will earn a D-. A more detailed grading table is given below.

(See Jeb’s post on Nuts and Bolts: Part 2 for an example of a grading table.)

This method of arriving at a course grade is unusual. There are several advantages. Each person will have several chances to display mastery of almost all of the problems. Once you have displayed mastery of a problem, there is no need to do problems like it on later exams. It is possible that if you do well on Quests you may only have one or two types of problems to do on the final exam. It is also possible that a few students will not even have to take the final exam.

This method stresses working out the problems in a completely correct way, since accumulating a bunch of Intermediate-level performances does not count for much. It pays to do one problem carefully and completely correct as opposed to getting four problems partially correct. Finally, this method allows you to easily see which parts of the course you are understanding and which need more attention.

If at any point during the semester you are uncertain about how you are doing in the class, I would be very happy to clarify.

[Aside: Another MBT enthusiast uses something like Padawan/Jedi/Knight instead of Apprentice/Intermediate/Master. Come up with your own names, you can.  Yes, hmmm.]
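
As a quick illustration of how such a grading table might translate into code, here is a sketch. Only the anchors stated above (16 mastered for at least a B+, 12 for a C, 8 for a D-) come from the sample syllabus; the in-between cutoffs are invented.

    def letter_grade(mastered):
        """Map a mastery count to a floor letter grade (cutoffs partly invented)."""
        thresholds = [(16, "B+"), (15, "B"), (14, "B-"), (13, "C+"),
                      (12, "C"), (11, "C-"), (10, "D+"), (9, "D"), (8, "D-")]
        for count, grade in thresholds:
            if mastered >= count:
                return grade
        return "F"

    print(letter_grade(13))  # C+ under these assumed cutoffs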

Writing a statement about MBT in the syllabus is a first step towards getting student buy-in. It is equally important  to have a prepared summary you can give on the first day or during the first week about what MBT assessment is and why you use it.

Selling the idea to students/first day comments

On the first day of class, I spend about 10 minutes explaining how my version of no-points assessment works, and why I choose to determine course grades in this way. My biggest selling point to students is that I want to give them more than one try to demonstrate that they understand the course material. It is helpful to emphasize that while the standard for mastery is high, every single person in the class has the chance to succeed with MBT.

Most students have not encountered a mastery-grading system before, but they are usually excited by the idea that they may not have to show up for the final exam if they master all of the topics during the mid-term assessments. (That is, if you choose to handle your course in this way — there are other models that require a cumulative final exam regardless of mastery levels during the semester.)

When explaining the system to my students, I use the syllabus explanation as a guide,  and I add in my own personal reasons for approaching assessment in this way — here are a few.

  • I find points arbitrary — what does it really mean to get a 7/10 on a quiz? It is also difficult keeping a “7/10” consistent throughout the semester, which is frustrating to students.
  • I think that real learning requires revisiting your previous work and addressing misconceptions. To be successful in an MBT course, you have to address (at least some of) your past mistakes.
  • Completing a math problem in its entirety is an important skill — it requires persistence and focus. Removing the notion of partial credit emphasizes this skill.
  • Giving students the list of topics and skills at the start of the semester allows them to see a path to success in the course. It also lays out exactly what the course goals are. This can help to remove some of the mystery surrounding the class and it also may help with math anxiety.

When presenting the MBT system in the first week, I try to be relentlessly positive. Usually it goes over well. Some students may wish to leave the class because it is different from what they are used to, and that’s ok too! On the whole, enrollment in my classes has been steady since I’ve started to use mastery grading.

Incorporating other graded components in the course 

It is easy to incorporate other components of the course that are not appropriate for an in-class assessment — for example, practice homework problems, projects, oral presentations, Maple projects, and so on. You can choose to grade these other components in the mastery style (M/I/A or K/J/P) or you can revert to a standard points system for these components. I’ll demonstrate with an example of how I incorporated homework into Calc I and II.

I used WebAssign (an online homework system) for daily homework in Calculus I and II. I gave students repeated attempts on the homework, just as I did on exams; in the case of homework, 6 attempts was the cutoff. In the final course grade, homework counted the same as two of the “mastery topics”: if a student (eventually) got at least 70% of the homework problems correct, I treated it as one mastery, and if they got at least 90% correct, I treated it as two masteries. Therefore, my 16 “topics” grew to 18 in the final course grade, and two of them aren’t really topics at all; they are dependent on homework. There are other ways to incorporate homework or presentations; see the resources page for more examples.
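
This conversion is easy to express in code. The 70% and 90% thresholds come from the paragraph above; the rest of the sketch is illustrative.

    def homework_masteries(hw_fraction):
        """Convert an eventual homework completion rate into mastery credits:
        at least 90% counts as two masteries, at least 70% as one."""
        if hw_fraction >= 0.90:
            return 2
        if hw_fraction >= 0.70:
            return 1
        return 0

    # 16 exam topics grow to 18 once homework credits are included.
    exam_masteries = 13
    print(exam_masteries + homework_masteries(0.92))  # 15 of 18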

Deciding on the list of skills to master

I recommend drafting a final exam that you would give in a standard version of the course, and determining the skills list based on that exam. It’s a great way to see what you are hoping students will accomplish. Some problems may require multiple different skills, so those sorts of problems may become two different topics. Others might not merit an entire topic on their own, but they might be easily grouped with another type of problem. For example, I had a topic in Calc II called “Applications of Integration to Arc Length, Surface Area, or Probability.” That topic usually had a question with two unrelated parts, but both parts used integration to solve a problem.

It’s a process of trial and error, for sure! I have edited my list of topics from past classes, after discovering that some topics shouldn’t be given equal weight in the final course grade.

Jeb’s post continues with an in-depth view of choosing exam problems, dealing with quizzes, and grading.