You have written a great test. Now it is time to grade it.
Grade them the same day and return them the next day
Think back to your own experiences in high school and college. Did you ever have a teacher or professor who returned your tests or papers weeks after you handed them in? How motivated were you to spend time reviewing the feedback? Probably not very. Timely feedback is critical for our students’ success. Ideally, that means the very next day.
I know this is extremely difficult with the everyday craziness of being a teacher. But here is my thinking. Suppose I have 3 AP Stats classes with a total of 90 students. I know that grading 90 tests will take me about 5 hours. I have to work those 5 hours no matter what…so why not do it all in one day so that my students get immediate feedback?
Here is how I manage it. I plan my week around test day. I get all of my other planning and paperwork done so that all of my free time is available on test day to grade. For 90 tests, I need 5 hours of grading time. This often plays out as 1 hour during my prep, 1 hour after school, 2 hours after the kids go to bed, and 1 hour the next morning before school. This is hard, but I am just taking work from later in the week and stuffing it all into one day. When I am finished, the feeling is great. There is nothing worse than spending 3 days with ungraded tests hanging over my head.
I am so committed to this idea that I have held myself to this standard for every quiz and test since I started teaching 16 years ago. I sign a teacher contract in front of my students at the beginning of the year so that they can hold me accountable. This also shows students that I work hard for them, which makes them willing to work hard for me.
Give descriptive feedback
“23/30 = 77% = C” is feedback. But it is only minimally helpful feedback for students. What did the student do well? What do they need to work on? If we are to treat our tests as a formative experience that contributes to students’ overall learning, they must receive descriptive feedback.
At first I thought descriptive feedback meant that you had to spend hours writing each student multiple paragraphs discussing their strengths and weaknesses. This is not true. It can be much simpler (and quicker) than that. I use 4 types of feedback:
Underlined text: the part of your solution that put you on the rubric.
(-1/2) or (-1): you lost points from the rubric here.
Circled text: this is the word/idea/graph/phrase that you missed that caused you to lose points.
Short comments: “SOCS” or “why” or “explain” or “see Starburst example in notes” or “interpret” or “context”.
All of these provide students with feedback. They need to know which part of their 6-sentence paragraph is the part that earned points on the rubric. They need to know where and why they lost points on the rubric.
Use a rubric
Hopefully, you wrote a rubric before you gave your test. Of course, the rubric might need to be adjusted once you see all of your student responses. Many teachers ask me if they should be using the AP rubrics to assess their own students. The answer is yes, but I can tell you that my students are not at all ready for this at the beginning of the year. It takes several months for me to train them how to write solutions that will earn them full credit on AP rubrics. For this reason, I tend to use my own softer rubrics when grading, then I discuss the rigor of the AP rubric when I go over the test. By the end of the year, my students are in a position to be successful when I am using actual AP rubrics.
I have also heard of some teachers using the AP rubrics and then applying a curve to adjust scores; I see this as effective too.
Correct notation, titles and keys on graphs, showing a formula first, correct use of vocabulary, answering all parts of the question, using context. These are all of the details that distinguish an excellent student from an average one. These details must be addressed from the start. This doesn’t always mean that points are taken off, but there should be feedback for the student (often I put marks/comments on a test that don’t take points off their score). I always tell my students that “the difference between an A and a B on my tests is in the details”.
Keep data on student performance
I typically record the number of A’s, B’s, C’s, D’s, and E’s for each test, along with the class average. When I give the same test next year, I will be able to compare groups. This helps inform me about the effectiveness of any instructional changes, as well as about differences between groups. I also do an item analysis for the multiple-choice questions. That way, I know which questions were difficult, as well as the most common incorrect responses.
Use different color pens for each assessment
I have absolutely no research to back this one up, but I swear students learn better when feedback isn’t red.
I am convinced that one of the advantages that my AP Statistics students have over students from other classrooms is that we assess often (12 tests, 30 quizzes, 1 significance test test, 1 midterm, and 1 final exam) AND that these assessments are used as part of the learning process.