April 1, 2020 | Vol. 77, No. 7

A Better Way to Assess Discussions

When teachers give a group grade for discussions, students focus on collaboration—not "airtime."


Assessment | Instructional Strategies
My teaching career took a pretty dramatic turn in 2006 when I was hired to teach at a Harkness school, The Masters School, just outside New York City. Harkness schools are designed on the premise that learning should happen primarily through Socratic discussion. That means that whether it's Algebra II or American History, student-led discussion and problem solving are the core learning mode in every classroom. For me, an experienced English teacher at the time, it wasn't actually a huge shift to have a discussion-based classroom—I already conducted my high school English classes that way—but what was truly different was how The Masters School assessed those discussions.
A couple of years before I arrived, the English department had collectively developed a rubric (see my adaptation in fig. 1) to assess student participation during class discussion. The rubric lists a series of behaviors and skills that students are expected to demonstrate, such as citing the text to support their ideas; not interrupting one another; digging deep to ask and answer important, insightful questions; and engaging in a balanced discussion in which shy students are encouraged to speak up and talkative, dominant students are encouraged to listen. At the top of the rubric is printed: "Because this is a team effort, there will be a team grade. The whole class will get the SAME grade."
This was new to me. A group grade? For discussion? How does that work? But I quickly saw how it worked—well. Extremely well.

Nothing to "Dread"

At the beginning of the year, when I introduced the rubric and the group grade, there was a clear understanding among my students that we were a team working collaboratively toward a common goal: a great discussion. Like a soccer team, we were working in concert with one another, no one "player" hogging the proverbial ball and sucking up all the action.
Prior to using this rubric at The Masters School, I had always counted class discussion for about 10 percent of a student's overall grade, and that number was based on the student's volume of participation. Using the collaborative rubric, though, made the flaws of that system apparent. First, why was I rewarding volume of participation? Alice could be the most prolific speaker but say off-topic or superficial things that don't actually help the discussion or result in anyone else learning from her contributions. Volume is an odd thing to reward if what we want is a quality discussion. Second, why was I rewarding bad behavior? Malik might have great insights, but when he cuts others off and responds to every single peer's comment as if the discussion were all about his opinions, he's putting on a one-man intellectual show. While the quality of his ideas might be good, his discussion and interpersonal skills are lacking. Why give him an A for participation if he is not a model participant?
With the flaws in my previous approach to assessing discussion evident, I began to embrace the core truth behind the group grade. It was decidedly not the "dreaded group grade," the one in which four people are assigned to work on a project together, one person winds up doing the bulk of the work, another does almost none of it, and all four end up with the same grade. That arrangement is unfair and masks each individual's performance, skill, and learning.
No. In this discussion-based setting, we were not assessing individual performance or understanding of the text; that can be done through traditional quizzes, papers, and projects. In this case, we were using collaborative assessment in discussion to assess the quality and effectiveness of the collaboration itself.

Assessing What Matters

This is an idea that I think is largely overlooked in education. Today's jobs nearly all demand effective collaboration. The World Economic Forum's 2016 Future of Jobs Report lists the top 10 skills workers will need in 2020; numbers 4, 5, and 6, respectively, are "people management," "coordinating with others," and "emotional intelligence." We think we teach these skills in K–12 classrooms, but are students really learning them? Isn't there a big difference between asking students to work together and assuming they will pick up those skills, and actually assessing the quality of that collaboration and giving students specific feedback on both their individual and collective participation? How else can students learn and get better as collaborators?

Figure 1. Graded Spider Web Discussion Rubric

Because this is a team effort, there will be a team grade. The whole class will get the SAME grade. The following list indicates what you need to accomplish, as a class, to earn an A. Have a truly hard-working, analytical discussion that includes these factors.
1. Everyone has participated in a meaningful and substantive way and, more or less, equally.
2. A sense of balance and order prevails; focus is on one speaker at a time and one idea at a time. The discussion is lively, and the pace is right (not hyper or boring).
3. The discussion builds and there is an attempt to resolve questions and issues before moving on to new ones. Big ideas and deep insights are not brushed over or missed.
4. Comments are not lost; loud or verbose students do not dominate; shy and quiet students are encouraged to speak.
5. Students listen carefully and respectfully to one another. For example, no one talks, daydreams, rustles papers, makes faces, or uses phones or laptops when someone else is speaking because this communicates disrespect and undermines the discussion as a whole. Also, no one gives sarcastic or glib comments.
6. Everyone is clearly understood; speakers are urged to repeat any comments that are not heard or understood.
7. Students take risks and dig for deep meaning, new insights.
8. Students back up what they say with examples and quotations regularly throughout the discussion. Students read aloud from Dialectical Journals and/or the text OFTEN to support arguments.
9. Literary features/writing style and class vocabulary are given special attention and mention. There is at least one literary feature AND one new vocabulary word used correctly in each discussion.
The class earns an A by doing all these items at an impressively high level. (Rare and difficult!)
The class earns a B by doing most things on this list. (A pretty good discussion.)
The class earns a C for doing half or slightly more than half of what's on this list.
The class earns a D by doing less than half of what's on the list.
The class earns an F if the discussion is a real mess or a complete dud and virtually nothing on this list is accomplished or genuinely attempted.
Unprepared or unwilling students will bring the group down as a whole. Please remember this as you read and take notes on the assignment and prepare for class discussion.
Source: Wiggins, A. (2017). The best class you never taught: How spider web discussion can turn students into learning leaders. Alexandria, VA: ASCD.
In my traditional approach to grading participation, both Alice and Malik would have wound up with A-range grades for their frequent contributions; but Zoe, who might be more reticent but skilled at asking excellent questions that consistently move discussions to deeper, more insightful ground, might get a B- because there wasn't space for her voice in the discussions. So why not assess what really matters, working backward from the end goals of what we want to see in our discussions? If we want to see strong "people management," "coordinating with others," and "emotional intelligence," then we need to create rubrics and assessment strategies that help students develop those skills. A group grade for discussion does this.
I have heard this many times from the students themselves. When I worked as an instructional coach (about a decade after my time at The Masters School) and would observe classes using Socratic discussion, I could see the difference in quality between discussions that were individually graded and those that were group graded. The individually graded classrooms often had students vying for airtime, not really listening or building on what others were saying; they were desperate to get out their two or three good ideas so that the teacher could check them off as having done their job. The students would later confess to me that they weren't really listening once they had said their share; they tuned out because they knew they had done what they needed for a good grade. In contrast, classrooms that had built a culture around a group grade seemed to run themselves. There was a more relaxed atmosphere, one in which everyone was on the same team, trying to help one another understand the topic or text more deeply. A true culture of collaboration feels warm and expansive, not cold and frantic, as when students compete with one another for airtime.

Into the Spider Verse

Over the years, I've honed this kind of discussion into something I call Spider Web Discussion—a web of balanced, high-level, collaborative inquiry—and I still feel it is the best practice I have adopted in nearly 20 years of teaching. This method is similar to the method of the Harkness discussion I learned from The Masters School, but I further developed it to include a thorough debriefing process, a system of charting and coding (so that I have individual data on each student for comment-writing and conferences), and a philosophy that every single voice matters and should be heard equally. I liken the practice to a real spider's web: if you remove just a few strands, the whole thing collapses. So, too, does a Spider Web Discussion.
To create a strong web, each member must pull her own weight, whether by speaking up more, listening more, asking great questions, or consistently referring the group back to the text. The truth is, it doesn't matter what you call it or exactly how you do it, as long as you implement clear, easy-to-assess rubrics for collaborative discussion and offer clear parameters for group behavior.

A "Symbolic" Grade

I do want to be clear that I now recommend using a formative "symbolic" group grade for all discussions so that report cards and individual grades are not unfairly weighted higher or lower by a collaborative practice, obscuring students' individual performance. Under the Harkness approach, the grade does count. But as I taught in other school systems, including in standards-based schools, I began to question how fair and accurate group grades are. So, I transitioned to a formative approach: I use the same rubric and process as before—students debrief and decide on a fair and accurate group grade—but the grade doesn't move the students' averages. In my gradebook, I enter all the discussion grades, weighted 0 percent; this way, we have a record of our group's progress.
Many people question whether the grade works as effectively when it "doesn't count," and I worried about this myself, too. But I found, to my surprise, that the formative nature of the grading didn't affect the quality of the discussions my students had. Perhaps students are so affected by the psychology of grades that they respond to these "carrots and sticks" whether or not they count; or perhaps students just like to discuss deeply and improve with consistent feedback and self-assessment. I tend to believe the latter.

Apps for That

In recent years, I've learned about—and personally piloted—some excellent tech tools to help enhance the feedback process for students in any type of discussion. I highly recommend Equity Maps, an app available for iPads that offers a digital way to both map and voice-record discussions. One of the most interesting features of Equity Maps is that it makes students aware of equity—how much talk time an individual has had versus others, or even one group of students versus another. For example, at the click of a button, you can see a graphic depicting how much talk time boys had compared to girls.
One of the newer tech tools I've discovered is a web-based app called Parlay Ideas, which helps engage students who process a little more slowly or are introverted. The app has two modes that teachers can use: a "live" feature that allows the teacher to track and help manage how many times each person speaks during a live discussion, and a "roundtable" online discussion board. Anonymity is automatically built into the roundtable discussion mode; students are given wonderful new pseudonyms each time, such as Marie Curie and Ernest Hemingway. However, the teacher can turn the anonymous feature off to identify students, while they remain anonymous to one another. This eliminates the pitfalls of online anonymity while ensuring that students are really reading and responding to great ideas rather than just to their friends. My students say they feel comfortable posting in this kind of environment, and they are much less afraid of offering up new or unique ideas for fear of being judged by peers as "stupid" or "wrong." This presents a rare opportunity for "drama-free" peer feedback.
Additionally, Parlay offers helpful assessment tools in the roundtable mode: Teachers can leave feedback on a student's comment that is either visible to all or only to that student; and can award points based on how effective students are in areas like "evidence," "communication," and "collaboration." This gives teachers a pragmatic way to provide individual feedback to each student during an online discussion, both on content and process, and opens up another way to assess collaborative inquiry.
It's key to note that I do not assign group grades for roundtables, since we aren't assessing the skills of listening, leading, and engagement the same way we are in a live discussion. In a live discussion, in accordance with the rubric mentioned above, we are looking for specific leadership and collaborative behavior in real time.

Next Generation Assessment

Changing how we assess student discussion shouldn't be something we fear or avoid. Our students need and want feedback from us on how effectively they participate, collaborate, question, and listen, and we're doing our future leaders a disservice if we don't adapt our discussion assessment methods for the next generation of learners. Outdated modes of grading discussion that reward volume over quality, or that pit students against one another in what is supposed to be an exercise in working together, need to be replaced by more nuanced, effective ways for our learners to grow.
After all, if we aim to produce graduates who have strong "people management," "coordinating with others," and "emotional intelligence" skills, then we have to assess those skills. A formative group grade for student-led collaborative inquiry is an excellent way to do just that.

Reflect & Discuss

➛ How do you currently assess your class discussions? Do your methods tend to reward volume over quality?

➛ Could a group grade for class discussions better serve your students? If so, would the grade count?

➛ How might you use Wiggins's rubric to facilitate deeper academic conversations?

End Notes

1 World Economic Forum. (2016). The 10 skills you need to thrive in the fourth industrial revolution. [Blog post].

2 Wiggins, A. (2017). The best class you never taught: How spider web discussion can turn students into learning leaders. Alexandria, VA: ASCD.

Alexis Wiggins has worked as a high-school English teacher, instructional coach, and consultant for curriculum and assessment. Her book, The Best Class You Never Taught: How Spider Web Discussion Can Turn Students into Learning Leaders (ASCD), helps transform classrooms through collaborative inquiry. Alexis is currently the Curriculum Coordinator at The John Cooper School in The Woodlands, TX.

Related Articles

The Unwinnable Battle Over Minimum Grades, by Thomas R. Guskey, Douglas Fisher, et al.

A Protocol for Teaching Up in Daily Instruction, by Kristina Doubet

Checking for Understanding, by Douglas Fisher & Nancy Frey

The Fundamentals of Formative Assessment, by Paul Emerich France

The Value of Descriptive, Multi-Level Rubrics, by Jay McTighe, Susan M. Brookhart, et al.
From our issue: Deeper Discussions