Getting past "grading" homework

A commenter named Carol recently raised a few questions that I felt warranted a more in-depth response.

I am really enjoying diving into this stuff... but I can't wrap my mind around the homework. How is it that you "require" them to do their homework, when it has no impact on their assessment? And, if a student never completes their homework, yet masters the standards - would they receive the same mark as the ones who do complete all the assignments? What becomes the point of having a deadline? Help!
It seems like the question of grading homework comes up quite often when discussing standards-based grading on this blog and in my face-to-face conversations with colleagues, too. 

It's easier than you think to "require" students to keep doing their homework without counting it toward their grade.  I conducted the following exercise with my Geometry students last year:
Me: "How many of you would still do the homework if it was worth 2 points rather than 3?"
(Most of the class raised their hands)
Me: "How many of you would still do the homework if it was only worth 1 point?"
(Fewer students raised their hands)
Me: "How many of you would still do the homework if it was worth zero points?" 
(Several students, but clearly a minority, raised their hands)
Me: "If you did not do the homework, how well do you think you would do on the test?"
(A few students chimed in saying they probably wouldn't do very well)
Me: "How many tests do you think you'd have to fail before you realized that you need to do the homework to be successful?"
(Some said one test while others said a few tests)
Me: "Okay, now that you know this about homework and tests...why would you stop doing your homework if it wasn't worth any points?"
(We then had a discussion about how homework is practice, how it acts like "insurance," and how the answers are freely available, so why NOT do it?!)  
In my standards-based grading system, students can be re-assessed for full credit on any learning target they'd like to improve on, but one prerequisite for this second-chance opportunity is that they put in the time in the first place.  By completing the homework, they have "purchased insurance," which gives them the right to cash in when a crisis hits.  If a student does not complete the homework but masters the standard, then "insurance" wasn't needed on his/her end - shame on me for not catching this in a pre-test! Full disclosure: this scenario of students not completing their homework yet mastering a learning target happens more often than I originally envisioned - I will be piloting a few tweaks to help alleviate this "problem" over the next few weeks.  Stay tuned for the results. 

For additional commentary on these subjects, you may want to read through a few of my previous posts on grading and assessment.

Avid MeTA musings readers, how do you handle homework?  If you don't grade it, how do you get buy-in from students, parents and administration?  If you are currently grading homework, what's holding you back from making the change?

Taking on the naysayers: allowing the new to replace the old

When I discuss formative assessment and my current standards-based grading system with colleagues, some aspects of it strike them as a lot of "work" (such as entering multiple scores into the grade book rather than just a single summative assessment score), while other parts are viewed as downright controversial. 

In Revisiting Professional Learning Communities at Work, DuFour et al. (2008) suggest three components of formative assessment:

  1. The assessment is used to identify students who are experiencing difficulty in their learning. 
  2. A system of intervention is in place to ensure students experiencing difficulty devote additional time to and receive additional support for their learning.
  3. Those students are provided another opportunity to demonstrate their learning and are not penalized for their earlier difficulty. (emphasis mine, pp. 216-217)
Allowing new evidence of learning to replace old evidence is such a hard sell.  I hear responses such as "Won't students give a mediocre effort the first time?", "We're not teaching responsibility!", and "This isn't how college works."  I've used this example in a previous post, and it's also the one I use to illustrate the point that new evidence of learning should replace old evidence:
Consider the following example. Assume that homework is graded on completion and quizzes/tests on content mastery.

Student A: Homework: 50% Quiz: 60% Test: 100%
Student B: Homework: 100% Quiz: 100% Test: 100%

Student A did not understand the concepts and therefore did not complete the homework. Somewhere between the "quiz" and the "test" Student A came in for extra help and finally "understood" the concept which explains his/her sudden improvement on the "test."

In the traditional grading system, which student earns a better grade? Student B, of course. A traditional points system penalizes "later learners." On the "test," both students demonstrated the same level of understanding, but Student A is penalized for initially struggling. Do we have a realistic expectation that students will "get it" the first day we teach concepts to them? If so, then why not have daily tests?
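To make the arithmetic concrete, here is a minimal sketch in Python. The category weights are hypothetical (the example above doesn't specify any), but the conclusion holds for most reasonable weightings: averaging in the early struggle drags Student A down even though the final evidence of understanding is identical.

```python
# A minimal sketch (not my actual gradebook) comparing a traditional weighted
# average with a "new evidence replaces old" grade. The category weights are
# hypothetical; the example above doesn't specify any.

WEIGHTS = {"homework": 0.2, "quiz": 0.3, "test": 0.5}   # assumed weights

def traditional_grade(scores):
    """Weighted average of every score ever earned."""
    return sum(WEIGHTS[category] * pct for category, pct in scores.items())

def latest_evidence_grade(scores):
    """Grade based only on the most recent evidence of mastery (the test)."""
    return scores["test"]

student_a = {"homework": 50, "quiz": 60, "test": 100}
student_b = {"homework": 100, "quiz": 100, "test": 100}

for name, scores in [("Student A", student_a), ("Student B", student_b)]:
    print(f"{name}: traditional {traditional_grade(scores):.0f}%, "
          f"latest evidence {latest_evidence_grade(scores):.0f}%")
# Student A: traditional 78%, latest evidence 100%
# Student B: traditional 100%, latest evidence 100%
```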
DuFour et al. go on to explain this point concisely in their book: 
"Our position has been challenged in several ways. Some have argued  students should not be given a second opportunity to learn, or, at the very least, their initial failure should be included in calculating the grade. They claim it would be unfair to allow low-performing students the opportunity to earn a grade similar to those of students who were proficient on the initial assessment. Our response is that every school mission statement we have read asserts the school is committed to helping all students learn. We have yet to find a mission statement that says, “They must all learn fast or the first time we teach it.” If some students must work longer and harder to succeed, but they become proficient, their grade should reflect their ultimate proficiency, not their early difficulty." (p. 219)
I am becoming increasingly convinced that any classroom claiming to involve formative assessment or "assessment for learning" must allow new evidence to replace the old.  It just makes sense.

It's not ALL about standards-based reporting...

A science teacher wandered into my room yesterday after school.  In addition to many hallway and after-school conversations with her, I have also passed on several articles related to standards-based grading and formative assessment techniques.  Towards the end of our conversation, she admitted, "I am going to do this."  We walked through her current grading scheme as well as the projects, assessments and assignments she typically uses each semester.  This turned into a brainstorming session on transitioning to a new system that reports standards rather than assignments.  One of the hang-ups was reporting responsibility.  She really wants to emphasize and communicate responsibility with her students both on a daily basis (coming to class prepared) and on assignments (turning in an assignment on time), so she is going to create a "responsibility" category and weight it at between five and ten percent of the overall grade.  Without this responsibility caveat, standards-based grading just wasn't going to happen for her.

Our conversation has kept me thinking about why standards-based grading makes so much sense. It went something like this:

Teacher: So if I give a student a second chance on a lab report, do I average the grades?

Me: With the new system, a student may not need to redo the entire lab report.  Maybe he/she just needs help with the data analysis section.

Teacher: Okay, so if they redo the data analysis part, do I average those two scores?

Me: You could, but that wouldn't be much different than what you're doing now, right?  What if that student's second draft of the data analysis was the best data analysis you've ever seen?  Shouldn't that student receive the same score as if he/she had done an awesome job the first time, or the next time?  My philosophy is that new evidence of understanding should replace old evidence.

Teacher: That makes sense.  What about a student that turns in a lab report late?  I usually take off points.

Me: That's where your responsibility category comes in.  If that student turned in a really great lab report, then that should be communicated to parents rather than taking off points like you used to do.  This should make conversations with parents and students much easier, right?  A less-than-perfect score is no longer as mysterious.  Was it because the work wasn't up to par?  Was it because it was late? A combination of the two?  Having to explain that mystery is history!

Teacher: I need some more time to think about this.  Matt, I am going to do this.

After the conversation, it hit me. It's more about changing the norms and values of the classrooms in my building and less about a particular grading system.  Are our grading and assessment practices clearly communicating student learning?  Is time the variable or is learning?  Evan Abbey summed up the culture piece much more eloquently than I can in a recent comment he made on my blog:
"...The traditional view is of a teacher as gatekeeper, sorting out students, not letting them to a diploma without the proper amount of effort to make it through the gate.

This does 2 things which are undesirable. First, it sets up teachers to be in an adversarial position against students, which often sets students up to feel that they have to be opposed to learning as well as the teacher. And second, it makes failure a terrible, terrible thing. I would argue that failure actually is a critical ingredient in learning (as Edison would attest).

The gardener approach flips that around, where the teacher is on the same side as the student, helping them attain measurable standards of learning, and letting students gauge their growth themselves. The student drives the data collection and assessment, looking at their learning against those standards and determining 1) how much further they have to go, and 2) how they are going to get there. The teacher encourages and facilitates learning.

The way to be a gardener instead of a gatekeeper is to ditch grading, which is teacher-centric, arbitrary, and set up for comparisons against other students for the purpose of ranking (and rejecting). In its place is to use standards-based reporting, where the standard of learning is objective and measurable, and students are not comparing themselves against anyone else but the standard...."
It's less about standards-based grading and more about creating a "gardener" mentality in our schools.  Standards-based grading is merely a framework that makes allowing new evidence of understanding to replace old evidence of understanding much more fluid.  As excited as I was to see a colleague embrace standards-based grading, I was even more elated to see her realize the need to allow new evidence to replace old evidence of learning.  Had we not talked about a standards-based grading system and its contrast with traditional grading practices, I am not sure if this new realization would have taken place.  Richard Elmore (as quoted in Revisiting PLCs at Work) talks about changes in practice as a means for a larger cultural impact:
"Only a change in practice produces a genuine change in norms and values...grab people by their practice and their hearts and minds will follow" (p. 108)

I learned a very important lesson yesterday.  It's not ALL about standards-based reporting; it's about rethinking assessment as a tool for learning rather than a hindrance to it.

Parent control and traditional grading schemes

I had an interesting conversation with a colleague today that followed up on a heated discussion that took place yesterday among several staff members over lunch in the teachers' lounge.  Yesterday's ongoing question was, "Do our grading practices promote compliance or learning?"  For example, when a teacher continually awards five points for merely completing a homework assignment, we're sending a hidden message to parents that the way their student can raise his/her grade is to make sure that all of the assignments are turned in.  Consider the following fictitious email communication:

Dear Mr. Townsley,
I logged on to PowerSchool last night and saw that Johnny's grade dropped from a C to a D.  Is there any extra credit he can do to raise his grade?

I look forward to hearing back from you today!

Jane Doe
Or here's one I'm guessing any secondary teacher can relate to:
Dear Mr. Townsley,
Jessica told me that she has been turning in all of her work, but her grade is still an "F."  Could you send me a list of the assignments she is missing so that she can get her grade up?

Thanks,
John Smith
How often does parent communication emphasize turning in assignments, late work and following directions?   These are all examples of compliance or "doing more work."  Wouldn't it be great if parent communication instead looked something like this?
Dear Mr. Townsley,
Suzie does not seem to have a passing grade right now.  Could you send me a list of the concepts and ideas that she still does not understand so that she and I can work on them together?

Thanks,
Sam Johnson
My colleague had a great "aha" moment as she thought about this conversation last night.  She agreed that traditional grading systems promote compliance and often report out responsibility rather than learning.  She challenged my thinking by suggesting that many parents are happy to ask about compliance and responsibility in relation to grades not only because it is what they were used to in school, but also because it is something they can control.  Parents can quickly and easily ensure that their children are completing assignments.  There is great satisfaction in holding homework completion up as bait for free time watching television or playing video games.  "Once you have your essay written, you may play Halo."  She went on to suggest that many parents might begin to feel helpless once they realized the reason their student was failing Algebra wasn't because he/she wasn't turning in homework, but instead because he/she was unable to solve two-step linear equations.

Further evidence of this idea plays out when thinking about elementary report cards, which tend to be descriptions of skills and abilities rather than letter grades.  These skills are typically less cognitively demanding, and presumably a larger portion of our society has mastered them, so parents may be willing to accept a skills-based report card at this age.  Parents are better able to control and teach those abilities at home, so there is more of a sense of ownership.  This sense of control seems to decrease as the content students are expected to know and apply becomes more complex and unfamiliar to the average citizen as students progress through the public school system.  Anatomy and calculus have a stigma attached to them that writing cursive and subtracting integers don't seem to possess.

Here lies a new challenge for system-wide change toward a more "standards-based" reporting method in our schools today: traditional grading schemes promote a "compliance" mentality, and parents seem to be happy with that because they feel like they have more control.  "Un-schooling" both students and parents is a natural first step, but what does this look like?  As one of only two teachers in my building currently embracing standards-based reporting, I see a steep hill ahead. 
  
Is "parent control" a valid challenge to standards-based reporting?   If so, what can be done at a system-wide level to overcome this challenge?

What a difference context makes...

Yesterday, I observed my student teacher's statistics lesson on confidence intervals using a t-distribution.  We both agreed that the lesson went fairly well and without a doubt emphasized the main ideas needed to help students understand the day's learning objective:

Construct and interpret confidence intervals for a population mean using t-tables when n < 30 and/or the population standard deviation is unknown.
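For readers who want the formula behind that objective, the interval in question is the standard one: sample mean ± t* · s/√n, where s is the sample standard deviation, n the number of observations, and t* the critical value from the t-table with n − 1 degrees of freedom.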
This topic is a common one taught in any introductory statistics class during the inferential statistics half of the syllabus.  In fact, just the school day before, students were exposed to the underlying ideas behind confidence intervals and the difference between descriptive statistics and inferential statistics.  After yesterday's lesson, my student teacher and I both lamented how tired and unresponsive the class as a whole was.  No matter how many jokes he cracked or stories he told about the history and relevance of confidence intervals, the students were very subdued and content with the silence.  Aside from a few strategies we discussed for alternative ways to engage the class, it inevitably seemed like one of those "it will hopefully be better tomorrow" conclusions.


("Library visitor" by umjanedoan on Flickr)


Then, "after school" happened.  A student who is not currently and has never previously enrolled in Statistics strolled into our room.  For the sake of anonymity, we'll call her "Barbara."  Barbara is working on her science fair project and her teacher suggested that she pay a visit to the resident statisticians (my student teacher and me) to critique her data analysis.  (Note: I'm really excited that this type of interdisciplinary collaboration is becoming more common between Dawn and our statistics department - last week our class critiqued her class' data tables, graphs and charts for appropriate use of descriptive statistics)  As it turns out, Barbara was testing the effect of several variables on a person's ability to run.  Her averages, in seconds, were within one or two of each other.  Based on her descriptive statistics toolkit (mean, median, mode and range), it appeared as though there was a meaningful difference between her variables due to the averages being separated by as much as two seconds.  After all, to the average US citizen, a runner who has raced eight times with an average time of 23.5 seconds seems faster than another running who has also raced eight times with an average of 24.4 seconds, right?!

Enter inferential statistics and confidence intervals.

Over the next thirty minutes, my student teacher and I taught Barbara an abbreviated version of the same lesson he had taught earlier that afternoon to our statistics class.  Barbara added standard deviation to her descriptive statistics toolkit and asked questions until she not only understood the idea behind confidence intervals, but also felt like she was able to explain it to someone else, a requirement for the science fair project.  Barbara asked questions like "Would my intervals change if I increased the number of trials?" and "Why should I choose a 95% level of confidence instead of a 99% level?"  She left the room satisfied and ready to add on to the data analysis piece of her project, as well as add more depth to her "results, discussion and conclusion" component based on her questions. 
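For readers who want to see the kind of computation Barbara walked through, here is a minimal sketch in Python. The eight race times per runner are invented to match the averages mentioned above (only the means appear in the story), and SciPy is used just to look up the t critical value.

```python
# A minimal sketch of the t-interval Barbara learned. The race times below are
# invented (only the averages, 23.5 s and 24.4 s over eight races, come from
# the story), so treat this as an illustration rather than her actual data.
import numpy as np
from scipy import stats

def t_confidence_interval(times, confidence=0.95):
    """Mean +/- t* * s/sqrt(n): appropriate when sigma is unknown and n is small."""
    times = np.asarray(times, dtype=float)
    n = times.size
    mean = times.mean()
    sem = times.std(ddof=1) / np.sqrt(n)             # sample std dev / sqrt(n)
    t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return mean - t_crit * sem, mean + t_crit * sem

runner_1 = [22.9, 24.1, 23.2, 23.8, 22.7, 24.3, 23.6, 23.4]   # mean 23.5 s
runner_2 = [25.0, 23.8, 24.9, 23.5, 24.6, 24.0, 25.2, 24.2]   # mean 24.4 s

print(t_confidence_interval(runner_1))   # roughly (23.0, 24.0)
print(t_confidence_interval(runner_2))   # roughly (23.9, 24.9)
# The two 95% intervals overlap, so a roughly one-second gap in the averages
# is not, by itself, convincing evidence of a real difference.
```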

I am 95% confident that Barbara left the room with a deeper understanding of inferential statistics than those currently enrolled in the statistics course.  What a difference context makes...