Cranking MBT up to 11 with Specifications Grading (Part III)

Hello! I’m not Mike!


My name is Derek Thompson, and I’m an Assistant Professor at Taylor University in Upland, IN. Like the others on this blog, I’ve seen the transformative power of mastery-based testing. And like Mike, I’ve seen how powerful it can be when combined with specifications grading. This post is about a huge mistake I nearly made: here’s what not to do with specifications grading and mastery-based testing.


One of the biggest perks of mastery-based testing is getting buy-in from students on day one. The idea that they can retake exams, and even skip the final if they do well, is alluring and comforting. Specifications grading takes this a step further. My syllabi last semester were carefully designed to adjust one aspect of MBT and to amplify another.

First, weekly quizzes (“quests”) were the first crack at a topic, the two midterms were the second, and the final was the third. Unlike the traditional MBT model, which favors earlier topics with more attempts, every topic got exactly three attempts. Second, if you look at the specifications outline below from last Spring’s Advanced Calculus course, you can see that grades can only go up. Previously, mixing MBT with traditional weighted percentages for other aspects of the course had kept me from making that claim.

[Image: specs1, the specifications grading outline from last Spring’s Advanced Calculus course]

There were a number of growing pains in my first run last Spring, but overall I think it was a success. The main problem was that the topics were too easy, and in that course no one even needed to take the final. The university, of course, wants a final on record in case grades are contested, and likewise midterms are needed to report poor midterm grades to coaches, advisors, etc.

With specifications grading, it’s easy to simply add more specifications, although you run the risk of making the syllabus too complex. When I presented these ideas at Mathfest 2016, I showed this “solved” list of specs for my Discrete Math for Computer Science course.

[Image: specs2, the “solved” list of specs for Discrete Math for Computer Science presented at Mathfest 2016]

Do you see the problem? While it makes sense for the final to be its own grade (and though you don’t see it on the chart, it would still count as a retake of each core objective), one of the main benefits of specs grading is now a lie. Students could do all the work, tank the final, and receive an F in the course under this model. This generated some awkward looks in the audience, and I thank God that I gave a poorly thought-out presentation before I gave my students a poorly thought-out syllabus.

I said something in that talk that I think is succinct and profound, and that illustrates why specs grading is the natural evolution of MBT. Last Fall, before I did specs grading, my students were already doing mastery proofs and redoing WebAssign problems in addition to MBT. The whole course was mastery-based – except for my syllabus. That’s important, and the syllabus I presented at Mathfest failed to preserve that aspect of last Spring’s design.

With that plan properly in the garbage, I was still left with the problem of needing students to take the midterms and the final. I’ve got a bad habit of moving by miles when I should only go inches if I’ve already got a good framework. Keeping myself in check, my solution for this semester is to keep the weekly quests but put some new topics on the two midterms, just as there would be in traditional MBT. Those topics can be attempted again on the final. Here’s the corrected Discrete syllabus:

[Image: specs3, the corrected specifications list for Discrete Math for Computer Science]

They can still opt out of the final if they’ve done well enough, but since the final will be only the second attempt at the midterm topics, that’s less likely. Meanwhile, the new midterm topics guarantee a large exam grade that I can use for midterm reports. While this doesn’t allow equal attempts at each topic, it lets me categorize them: the heavy material goes on the weekly quests while it’s fresh and is retested twice, while the midterm topics are lighter reminders of old material that only get retested on the final. Of course, students will have a few “coins” (I call them “power-ups”) to do specific retakes in my office if they need to. It’s also a nice way to hand out “extra credit” by granting retakes for extra work – they’ll still need to redo the original material correctly.

You’ll also notice that I have twice as many core objectives as I had quest topics in Advanced Calculus – breaking the objectives into smaller chunks will help relieve pressure and also make it more likely that students will have a few odds and ends to show up for on the final. It’s also consistent with the ACM standards for discrete math, which include about 40 learning objectives. Putting some of them on the midterms as new topics helps space out this large number as well.

And you might also notice that there’s still one way grades can go down – unexcused absences. This is consistent with official university policy, but now it’s neatly wrapped into the specs so that students are reminded of the issue, instead of it being in a random corner of the syllabus and forgotten.

To summarize, I think one of the most appealing aspects of MBT, and of specs grading done right, is that grades only increase. Students begin with an F, but they can never lose credit for work accomplished. That’s a huge motivator, and it encourages growth mindset, grit, and all of those other learning dispositions we’ve been talking about. Feel free to comment below with questions, or send an email to drthompson@taylor.edu.


Cranking MBT up to 11 with Specifications Grading (Part I)

I first implemented MBT in a Calculus I course in the spring of 2015 and realized, as Austin writes, that MBT is “self-evidently better” than traditional, points-and-partial-credit-based testing. I’ve used some form of MBT in every course since then.

Around the same time, a friend retweeted a post by Robert Talbert of Grand Valley State University that made reference to “specifications grading”, a system proposed by Linda Nilson (I would strongly recommend you read her book if you are thinking of implementing specs grading). With my mind already opened to the possibility of alternative grading systems, I took the plunge and read several of Talbert’s blog posts, which were then hosted at the Chronicle of Higher Education and are now on his own site. To be clear, most of what I have learned about implementing specs grading in a math course, I have learned from reading him. But in this post and the next, I’ll do my best to explain how the system works and how naturally MBT fits into such a course.


Nuts and bolts, part 2

This post was written by Jeb Collins

Welcome back to the nuts and bolts of mastery-based testing. This post is part 2 on the logistics of actually implementing this assessment method in the classroom. If you haven’t read part 1, Katie does a great job explaining how to prepare before the semester and what happens during the first week. In this post I will talk about what happens once the semester starts: how to create the many tests and quizzes written throughout the semester, how to grade them, and how the final grade is calculated. I’ll be writing about my own experiences and choices, with the huge caveat that there are many ways of implementing this method, and mine is simply one of them. I’m sure you’ll see many others in future posts.

Choosing Exam Problems

Once the mastery topics are chosen, individual problems must be chosen to appropriately test those topics. There are a couple of things to keep in mind. First, one of the main differences from a points-based exam is that you need multiple versions of each question, so you want to choose questions that can be easily modified. I have found that such questions tend to come in two forms. The first is something like a differentiation or integration question, where the question can be changed completely by a small substitution such as x^2 \to \sin(x); it is generally very easy to come up with multiple versions of those questions. The other form is word problems, such as related rates. There, a good deal of work is needed to come up with a qualitatively new problem, so I like to find two or three questions that are qualitatively different and then create further versions from those by simply changing the numbers.
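To make the change-the-numbers step concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the template, the parameter ranges, and the function name are illustrative, not taken from an actual problem library.

    import random

    # One qualitative related-rates problem; variants differ only in the numbers.
    TEMPLATE = ("A {length}-ft ladder leans against a wall. Its base slides away "
                "at {rate} ft/s. How fast is the top sliding down when the base "
                "is {distance} ft from the wall?")

    def make_variant(seed):
        """Return one numeric variant; seeding keeps each version reproducible."""
        rng = random.Random(seed)
        return TEMPLATE.format(length=rng.choice([10, 13, 15, 25]),
                               rate=rng.choice([1, 2, 3]),
                               distance=rng.choice([6, 8, 9, 12]))

    # Three versions of the "same" problem for three different exams:
    for seed in range(3):
        print(make_variant(seed))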

I also choose slightly harder problems for my mastery-based tests. I have a couple of reasons for this. First, I don’t give a cumulative final, so once a student has mastered a topic they are never tested on it again. So I want to be sure that students who obtain mastery on a topic have truly mastered the material. Also, each test has only four new problems on it, and therefore is slightly shorter than a typical points-based test.

Finally, something to consider when writing questions is their length. Mastery questions will often have multiple parts, since most topics cannot be tested by one problem. However, this can lead to trouble if the question is too long, since failure on one part means a loss of mastery on the whole topic. For example, in Calculus 2 I wrote an integration question that included both trig substitution and partial fractions. I noticed that students would often get one part right and the other wrong, and therefore wouldn’t earn mastery of the topic. The next semester I split the two techniques into separate topics, and it worked much better. I had more total mastery topics, but the students were better able to demonstrate their ability on the different integration techniques.

Writing Exams

Something I didn’t realize when I started using mastery-based testing was that writing the tests can be considerably different from a points-based method. In a points-based method, each test is the same length and covers distinct topics. In mastery-based testing, each test is longer than the last, covering new material in addition to different versions of problems from the previous tests. In a points-based method, the quizzes are uniform; that is, each student takes the same quiz. In mastery-based testing, I may have to create as many as 18 different quizzes for one day, since each student may wish to attempt a different problem. All of this adds up to needing a different, more efficient method for creating tests and quizzes.

What I like to do is create a library of problems for each mastery topic. I create a separate file for each topic and store all the versions of my problem for that topic in it. I usually write about 3-10 problems per topic, with fewer versions for the later topics since they will be used less often. This is a good thing to do during the summer or winter break, when I have some extra free time. With this library in hand, writing the exams or quizzes becomes a matter of copy and paste, and can actually be done very quickly.
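That copy-and-paste step is also easy to automate. Here is a minimal sketch, assuming a hypothetical folder layout with one file per version (problems/topic01/v1.tex, v2.tex, and so on); none of these names come from an actual setup.

    import random
    from pathlib import Path

    LIBRARY = Path("problems")   # assumed layout: problems/topic01/v1.tex, ...

    def pick_version(topic, rng):
        """Pull one stored version of a topic's problem at random."""
        versions = sorted((LIBRARY / topic).glob("v*.tex"))
        return rng.choice(versions).read_text()

    def build_exam(topics, seed=0):
        """Concatenate one version per topic into a single exam body."""
        rng = random.Random(seed)
        return "\n\n".join("% Problem {}: {}\n{}".format(i + 1, t, pick_version(t, rng))
                           for i, t in enumerate(topics))

    # Exam 2 in a 16-topic course: variants of topics 1-5 plus four new topics.
    print(build_exam(["topic{:02d}".format(n) for n in range(1, 10)]))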

Quizzes

In mastery-based testing, quizzes are an opportunity for students to obtain mastery of a single topic between the exams. Since failure on the exams is not only expected but considered beneficial for the students’ learning, I want to give my students as many opportunities as possible to demonstrate that they have learned from their failures on previous tests. As the semester goes on, the exams get longer and more stressful for the students, and quizzes let them demonstrate mastery on a single question. This obviously increases their grade, but it also leaves fewer questions to attempt on the next test, which reduces stress. For these reasons I actually find students asking for more quizzes than I have time to give.

When I give quizzes, I let each student choose which mastery question they would like to attempt. This only makes sense, since each student will have mastered different topics on the previous exams. To save paper, I don’t print out quizzes with all the questions on them; instead, students email me by the day before to let me know which problem they want to attempt. This is usually the biggest problem, because students forget to send the email and are then unable to take the quiz. So I remind them often for about a week beforehand to send me a quick, one-line email, and I also print out extra quizzes. Usually, by the time the second quiz happens, all the students who want to take it remember to email me. I always give the quiz at the end of the class period, because some students will not take it, either because they have mastered all the topics up to that point or because they are simply not prepared. I am fine with this, and just let those students leave early.

Grade Distribution

I use the following grade distribution in my mastery-based classes:

  • Tests: 80%
  • Homework: 20%

I choose to have the majority of the student’s grade determined by the mastery score. My main reason is that I design the exam questions so that a student who masters them understands the material of the course. Another reason is that the mastery exams are where growth mindset is emphasized in the course, and I want that mindset to carry the most weight for the students.

One aspect of mastery-based testing that I have found to vary widely among implementers is how the average test score is calculated. I calculate the test average by subtracting 5% for every question not mastered. This means, of course, that you could master no questions and still have a positive exam average, but since that would result in an F under any reasonable grading scale, it really doesn’t matter. However, it does make it somewhat difficult for students to work out their grade as the semester progresses. To help with this, I provide the following table on my syllabus:

[Image: mbt-table, the syllabus table relating the number of questions mastered and the homework score to the course grade]

This helps the students get an idea at the beginning of the semester of how many questions they need to master to obtain the grade they want. More importantly, it helps them determine their grade during the semester. This table also allows me to easily emphasize the importance of doing homework, as the students can easily see that doing poorly on homework requires them to do very well on exams to compensate.
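In code, the arithmetic looks something like the following sketch. The 5%-per-question rule and the 80/20 weighting are from this post; the 16-topic count and the function names are my own assumptions.

    TOTAL_TOPICS = 16   # assumption: a 16-topic course

    def test_average(mastered, total=TOTAL_TOPICS):
        """Start at 100% and lose 5 points per question not mastered."""
        return 100 - 5 * (total - mastered)

    def course_grade(mastered, homework_pct):
        """Weighted course average: tests 80%, homework 20%."""
        return 0.8 * test_average(mastered) + 0.2 * homework_pct

    # A student who masters 13 of 16 questions with a 90% homework score:
    print(course_grade(13, 90))   # 0.8 * 85 + 0.2 * 90 = 86.0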

Hopefully these posts have given a good broad overview of how a mastery-based class could be run. As I mentioned above, this is only one way in which the class could be set up. There are really as many implementations of this method as there are instructors using it, but these basic guidelines should give a good starting place for those of you thinking about trying out mastery-based testing.


Nuts and bolts, part 1

This post is aimed at educators who are considering taking the leap into mastery/standards/specifications-based assessment, but aren’t sure where to begin. My biggest question when I first heard about mastery-based testing (MBT) was: No points(!) — how does that work?

This post is part 1 of the ‘Nuts and bolts’ instructions for how to do points-free assessment with MBT. Part 1 deals with the logistics before the course begins — how should you write the syllabus and explain MBT to your students? Part 2 explains the specifics of writing exams and grading that you will be dealing with once the course starts.

Taking a mastery-based testing (MBT) approach to a course requires some advance planning, but it is the type of planning that helps not only with assessment but also with the long-term goals of the course. If you are nervous about trying a new assessment method like MBT or specifications grading, you might consider starting in a course you’ve taught before. I did this during the first year of my current job — I taught Calc 1 in the fall with a normal points system (100-point exams, 10-point quizzes, etc.). The following semester I taught a section of the same course, but used an MBT approach to assessment.

Feeling comfortable with the content is important for laying the groundwork of mastery-based assessment, because you’ll need a good idea of what students should master in the course before you begin. You can think of it as writing the final exam before you write your syllabus. If you are the type of person who has already written the final for the course you’ll teach next semester — then congratulations, this will be easy for you! If instead you are like me and sometimes write the final exam during finals week, then I’m here to convince you that it’s still (mostly) easy to use MBT.

The first step is to create a list of the essential skills and topics that the course will cover. If you haven’t taught the course before, creating a list of learning outcomes is one of those recommended teaching techniques that (to be honest) I didn’t hear about until I had taught several courses on my own. [See, for example: Writing Student Learning Outcomes] In short, it is a good idea, even if you aren’t using MBT next semester. Your list might consist of both general skills like “students will engage in problem solving” and also subject-specific techniques like “students will be able to take the derivative of a composition of functions using the chain rule”. How you group the learning goals for assessment depends on the course and your own preference. For simplicity, you may initially try to limit yourself to 16 umbrella topics, and students can try to master each topic. For examples of lists like this, see the resources page.

I’ll say more about choosing the list of skills below, but now it’s time to write the syllabus!

Syllabus explanation of MBT

Your syllabus will look mostly the same as normal, except it should have some explanation of the mastery-testing method. Whether you put the list of topics on the syllabus is up to you — I didn’t include it the first time I used MBT because I was still figuring things out when the semester started. Now, however, I do put the list on the syllabus, because it helps to have it easily accessible to students.

In the syllabus section where you would normally explain how many midterms there will be and whether the final will be cumulative, you will now have to explain how MBT works. I borrowed most of this directly from Austin’s syllabus, but this is just a starting point. In my class, I call the mid-terms “Quests”. The final exam is really just another Quest. Here is the MBT statement from my syllabus for Calc II.

Grading Policy

Grades in this course will be determined by an assessment system that relies on mastery of 16 sorts of problems. For each type of problem you will have multiple attempts to demonstrate mastery. The examinations (Quests and the final exam) will all be cumulative. The first Quest will have 5 problems, the second will have 5+4=9 problems (with 5 being variants of the ones occurring on the first quest), the third will have 9+4=13, and the fourth quest and the final exam will both have 16 problems. There may also be re-attempts periodically to allow for further attempts at each type of problem.

I record how well you do on each problem (an M for master level, an I for intermediate level, and an A for apprentice level) on each quest. After the final, I use the highest level of performance you achieved on each sort of problem and use this to determine your course grade.

If at some point during the semester you have displayed mastery of each of the 16 sorts of problems, then you are guaranteed at least a B+ (homework and Maple proficiency will determine higher grades). The grade of C can be earned by demonstrating mastery of at least 12 of the types of questions. If you master at least 8 of the types of problems you will earn a D-. A more detailed grading table is given below.

(See Jeb’s post on Nuts and Bolts: Part 2 for an example of a grading table.)

This method of arriving at a course grade is unusual. There are several advantages. Each person will have several chances to display mastery of almost all of the problems. Once you have displayed mastery of a problem, there is no need to do problems like it on later exams. It is possible that if you do well on Quests you may only have one or two types of problems to do on the final exam. It is also possible that a few students will not even have to take the final exam.

This method stresses working out problems in a completely correct way, since accumulating a bunch of Intermediate-level performances does not count for much. It pays to do one problem carefully and completely correctly, as opposed to getting four problems partially correct. Finally, this method lets you easily see which parts of the course you understand and which need more attention.

If at any point during the semester you are uncertain about how you are doing in the class, I would be very happy to clarify.

[Aside: Another MBT enthusiast uses something like Padawan/Jedi/Knight instead of Apprentice/Intermediate/Master. Come up with your own names, you can.  Yes, hmmm.]
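For concreteness, here is a minimal sketch of the bookkeeping the syllabus statement above describes: record a level for each problem on each quest, keep the highest, and count masteries. Only the thresholds quoted above are encoded; anything finer belongs in the full grading table.

    # Levels in increasing order: Apprentice < Intermediate < Master.
    LEVELS = {"A": 0, "I": 1, "M": 2}

    def best_levels(attempts):
        """attempts: problem number -> levels earned across all quests.
        Returns the highest level achieved on each problem."""
        return {p: max(tries, key=LEVELS.get) for p, tries in attempts.items()}

    def guaranteed_floor(attempts):
        """Lowest course grade guaranteed by the thresholds quoted above."""
        masteries = sum(1 for lvl in best_levels(attempts).values() if lvl == "M")
        if masteries >= 16:
            return "at least B+"   # homework and Maple proficiency decide the rest
        if masteries >= 12:
            return "at least C"
        if masteries >= 8:
            return "at least D-"
        return "no guarantee"

    # A problem that went A, then I, then M over three quests counts as mastered:
    print(guaranteed_floor({p: ["A", "I", "M"] for p in range(1, 17)}))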

Writing a statement about MBT in the syllabus is a first step towards getting student buy-in. It is equally important  to have a prepared summary you can give on the first day or during the first week about what MBT assessment is and why you use it.

Selling the idea to students/first day comments

On the first day of class, I spend about 10 minutes explaining how my version of no-points assessment works, and why I choose to determine course grades in this way. My biggest selling point to students is that I want to give them more than one try to demonstrate that they understand the course material. It is helpful to emphasize that while the standard for mastery is high, every single person in the class has the chance to succeed with MBT.

Most students have not encountered a mastery-grading system before, but they are usually excited by the idea that they may not have to show up for the final exam if they master all of the topics during the mid-term assessments. (That is, if you choose to handle your course in this way — there are other models that require a cumulative final exam regardless of mastery levels during the semester.)

When explaining the system to my students, I use the syllabus explanation as a guide,  and I add in my own personal reasons for approaching assessment in this way — here are a few.

  • I find points arbitrary — what does it really mean to get a 7/10 on a quiz? It is also difficult to keep a “7/10” consistent throughout the semester, which is frustrating to students.
  • I think that real learning requires revisiting your previous work and addressing misconceptions. To be successful in an MBT course, you have to address (at least some of) your past mistakes.
  • Completing a math problem in its entirety is an important skill — it requires persistence and focus. Removing the notion of partial credit emphasizes this skill.
  • Giving students the list of topics and skills at the start of the semester allows them to see a path to success in the course. It also lays out exactly what the course goals are. This can help to remove some of the mystery surrounding the class and it also may help with math anxiety.

When presenting the MBT system in the first week, I try to be relentlessly positive. Usually it goes over well. Some students may wish to leave the class because it is different from what they are used to, and that’s ok too! On the whole, enrollment in my classes has been steady since I’ve started to use mastery grading.

Incorporating other graded components in the course 

It is easy to incorporate other components of the course that are not appropriate for an in-class assessment — for example, practice homework problems, projects, oral presentations, Maple projects, and so on. You can choose to grade these other components in the mastery style (M/I/A or K/J/P) or you can revert to a standard points system for these components. I’ll demonstrate with an example of how I incorporated homework into Calc I and II.

I used WebAssign (an online homework system) for daily homework in Calculus I and II. I gave students repeated attempts on the homework, just as I did on exams; in the case of homework, 6 attempts was the cutoff. In the final course grade, homework counted the same as two of the “mastery topics” — if a student (eventually) got at least 70% of the homework problems correct, I treated it as one mastery, and if they got at least 90% correct, I treated it as two masteries. Therefore, my 16 “topics” grew to 18 in the final course grade, and two of them aren’t really topics at all; they are dependent on homework. There are other ways to incorporate homework or presentations — see the resources page for more examples.
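A sketch of that conversion (the 70%/90% thresholds are from this post; the function itself is just illustrative):

    def homework_masteries(pct_correct):
        """Translate eventual homework completion into 0, 1, or 2 masteries."""
        if pct_correct >= 90:
            return 2
        if pct_correct >= 70:
            return 1
        return 0

    # Mastering 15 of 16 topics plus 92% of WebAssign problems counts
    # as 15 + 2 = 17 of the 18 "topics" in the final course grade.
    print(homework_masteries(92) + 15)   # 17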

Deciding on the list of skills to master

I recommend drafting a final exam that you would give in a standard version of the course and determining the skills list from that exam. It’s a great way to see what you hope students will accomplish. Some problems may require multiple different skills, so those sorts of problems may become two different topics. Others might not merit an entire topic on their own, but they might be easily grouped with another type of problem. For example, I had a topic in Calc II called “Applications of Integration to Arc Length, Surface Area, or Probability”. That topic usually had a question with two unrelated parts — but both parts used integration to solve a problem.

It’s a process of trial and error, for sure! I have edited my list of topics from past classes, after discovering that some topics shouldn’t be given equal weight in the final course grade.

Jeb’s post continues with an in-depth view of choosing exam problems, dealing with quizzes, and grading.