March 1, 1994 | Vol. 51, No. 6

Making Sure That Assessment Improves Performance

The most important reason to develop performance assessment is that it motivates educators to explore questions at the very heart of the purposes and processes of schooling.

Critics have charged that traditional assessment practices have been isolated from and damaging to instruction, learning, and school practices. How can we make sure that the consequences of performance assessments will be better?
In 1990 the California Assessment Collaborative (CAC) began to look for answers to this question. The state legislature created the Collaborative to study and support the development and implementation of performance assessments designed chiefly by teachers.
In 30 classroom-, school-, and district-based assessment development projects, teachers are inventing a wide range of assessments, which include projects, exhibitions, open-ended questions, and portfolios. Although varied in subject and grade-level emphasis, size, and scope, the projects are all committed to developing assessment strategies that will improve teaching and learning. Project teachers are also guided by California's state curriculum frameworks, which all share a common vision of instruction: every student is to construct content knowledge, learn to solve complex problems, and work collaboratively in groups.
To support the development projects, the Collaborative's staff provides technical assistance, conducts research, and furnishes facilitators and coaches as needed in the local projects. The staff also disseminates information about project experiences to teachers throughout the state.
All Collaborative participants are working toward instructionally sound assessment—that is, rich performance tasks that guide instruction and build the capacity of teachers, students, and schools to improve their work. Assessment can accomplish these ends only when it produces information that influences what students are taught, how they are taught, and what schools do to support learning. A student exhibition is little more than a show if it is performed on the last day of school. Likewise, a portfolio amounts to little more than a work folder if teachers and students have neither the necessary time nor skills to understand what it says about what students know and need to learn.

Four Key Practices

  1. Articulating standards and assessment design. Much of the work of the Collaborative's various pilot projects is devoted to negotiating performance standards and creating fair and meaningful assessment activities. Teachers strive to invent tasks that get at the essential elements of disciplines, present real-world problems, and assess the full range of what students are expected to know and be able to do.

Together, teachers debate the meaningfulness of tasks, decide what menu of assignments (if any) should guide the composition of a portfolio, and discuss criteria and processes for interpreting or scoring student performance. Many teachers have commented that their work in the Collaborative is the first time that they have been asked for their views on what students ought to learn or have been given an opportunity to collaborate with colleagues on student performance standards.

Typically, project teachers engage in a recursive series of activities. For instance, teachers might invent a portfolio design, test it with students, and then revise the design. As teachers grapple with these challenges, they seldom use the terms “validity” and “reliability,” but in fact they continually struggle with these age-old issues.

Our projects provide numerous examples of tasks that, by design, cause teachers to use the best instructional practices as they prepare students for the assessment. Still, performance-based assessment is no panacea. Our experience also reveals that even a performance-based task can perpetuate the teaching of relatively unimportant facts or skills.

When the assessment development efforts have prompted instructional benefits, teachers tell us, the improvements have come not only from having new assessment tools, but from having created them. Active participation in the process of debating, building consensus, inventing tasks, and learning to interpret student work was important in effecting the instructional improvements associated with the new assessments (Jamentz 1993). The implication is quite momentous: instructionally sound assessment apparently requires more than the importation of meaningful tasks and standards. The real challenge in assessment reform is structuring an opportunity for educators to work together to reinvent what teachers, students, and schools do.
  2. …
  3. …
  4. Monitoring the consequences of assessments to gauge their impact on teaching and learning. One of the main criticisms leveled against traditional assessments is that they are used to sort students and, on that basis, to deny educational opportunities (Darling-Hammond 1991). The consequence of instructionally sound assessment is quite different—it enhances the opportunity to learn. The assessment data are not used to label students. They simply provide information on what students already do well and pinpoint what they still need to learn. Schools and teachers can use the results of such assessments to determine appropriate learning experiences and to guide the redesign of school programs and structures so that teacher and student performance improves.

The Challenge of Performance Assessment

In practice, performance assessment systems may or may not work this way. Projects in our Collaborative have supplied numerous instances in which performance assessment data were used to improve coordination between programs for special-needs students and mainstream classrooms, and to make plain the need for specialized professional development programs. Unfortunately, however, performance assessments were also sometimes used like final exams, with teachers and students showing no inclination to act on the data the assessments revealed. Instructionally sound assessment ultimately forces schools to confront questions such as these:
  • What do we want students to know and be able to do?
  • How will we know they can do it?
  • What resources must be available to ensure that all students succeed?
  • How do we structure and pace an instructional program that prepares all students to perform well?
  • What should teachers, administrators, parents, and policymakers do to ensure appropriate opportunities for all students?
These central questions must be answered by everyone with direct responsibility for making schools work. They will not be answered by some ad hoc alternative assessment committee or by implanting the latest performance assessments in the same schools that we have today. Site-level performance assessment development is of greatest value when the work is defined, not just as churning out performance assessment tasks, but as inciting all those connected with schools to examine and restructure the work that they do.

Darling-Hammond, L. (November 1991). “The Implications of Testing Policy for Quality and Equality.” Phi Delta Kappan 73, 3: 220–225.

Jamentz, C. (1993). Charting the Course Toward Instructionally Sound Assessment: A Report of the California Assessment Collaborative. San Francisco: CAC.

Kate Jamentz has been a contributor to Educational Leadership.

From our issue: The Challenge of Outcome-Based Education