At the recent ASCD assessment conference, a school principal went to the microphone to say that he had just heard distressing news. His son, who usually did well in school, had scored too low on a standardized test to qualify for the district's program for gifted students. Although he admitted to doubts about such special classes, the principal was troubled by the prospect of his son being separated from his friends and intellectual peers. Grant Wiggins, a conference presenter, responded emphatically that a single test score should never be used to make such a decision.
That is especially true when other pertinent information is available. Tests are only indicators of performance, so it doesn't make sense to give them more weight than performance itself. That would be like a doctor paying more attention to a temperature reading than to the patient's actual physical health.
The point is easily obscured because we often speak of test grades as “performance.” When the authors in this issue use the term, they mean authentic performance: the ability to do things that are valued in the adult world. Some school subjects, such as music, physical education, and speech, are naturally performance-oriented. But for others, such as English, history, mathematics, and science, teachers and curriculum developers find it difficult to specify what students should be able to demonstrate.
For observable performances, such as oral reports, there is no mystery about how to assess them: Have students do them. (There may be certain logistical problems, but the conceptual aspects are relatively straightforward.) In many cases, though, we want students to understand abstract concepts and principles. Teachers customarily test for understanding by asking questions and evaluating student responses, but answering test questions is different from using information to accomplish real purposes. The challenge is to devise tasks that will make understanding evident. But what is understanding, and how can you tell whether someone understands? Grant Wiggins (p. 18) now puts that question at the center of his work.
A first step can be to have students explain how they solved a problem and why their answer is correct. Carol Parke and Suzanne Lane (p. 26) give examples from the QUASAR mathematics project showing that students' explanations sometimes reveal misconceptions even when the answer is right. In other assessments, students demonstrate their learning in several different ways. Science teacher Paul Egeland (p. 41) explains why he and his colleagues in the St. Charles, Illinois, schools decided to have four different assessments for each instructional unit.
When teachers think in performance terms, they can't wait until the end of a unit to devise their assessment—and they can't expect students to perform if they haven't learned how. Jay McTighe (p. 6), director of the Maryland Assessment Consortium, says that performance assessment, developed initially as an alternative to overuse of standardized testing, leads to greater emphasis on performance in the classroom. McTighe's guidelines for performance-based assessment include setting clear performance targets as the basis for curriculum and instruction and publicizing criteria and performance standards. Some school districts have reconceptualized their entire program this way. Superintendent David Hartenbach and his colleagues (p. 51) in Aurora, Colorado, report that Aurora has made performance-based education the centerpiece of a plan that will culminate with the graduating class of 2001.
All this requires lots of extra work for educators. And at a time when members of the public want better schools but aren't sure what “better” is, it also requires courage. So why do it? In the short term, performance assessments yield useful information about what students can do. In the long run, we want students to be good citizens, productive workers, and all the other things our official goals proclaim. That means we need to go beyond test scores to assess performance.