
February 1, 2001 | Vol. 58, No. 5

Revamping a Teacher Evaluation System

A Nevada school district took a quantum leap forward when it began to use its teacher evaluation system to energize teachers at all career levels.

Welcome to Nevada—Set your watch back 20 years! That was the slogan on a T-shirt that was sold throughout the state several years ago. As a native Nevadan, I was appalled. But Nevadans were buying and wearing the shirts, too, and everybody laughed when they read the slogan. I had to admit to some truth in the joke, especially as it applied to the field of education.
Until recently, if you had set your watch back 20 years, you would have noticed that the methods for evaluating teachers in the Washoe County School District in Reno, Nevada, had not changed significantly. Sometimes we put a new twist on an old idea and recycled it; other processes had stayed the same because it was easier to leave well enough alone or because we were focusing our attention elsewhere.

The Need for Change

By necessity, we have been working very hard to play catch-up in our state, creating new jobs, bringing in new industries, and revitalizing our schools and universities in response to the third highest growth rate in the nation—a 12 percent population increase is projected over the next five years. Both educators and community stakeholders understand that our state's future depends on our ability to raise the quality of the workforce by graduating more students with high levels of achievement.
The Washoe County School District educates 53,000 K–12 students. We hire about 400 teachers each year and—like all school districts—work hard to recruit and keep the best. That challenge has grown in the last several years, primarily because our in-state universities do not graduate enough teachers each year to meet our needs.
In addition to these challenges, educators were not happy with our evaluation system. Several years ago, the grumbling among both teachers and administrators about our archaic teacher evaluation process got the attention of the district's human resources division. As in many districts, our system was time-intensive for administrators. It did not result in a valuable end product for either teachers or administrators. Teachers complained that evaluation was something that was done to them rather than with their collaboration. Evaluations of veteran teachers performing satisfactorily gave them little new, enlightening, or challenging information. Evaluators paid more attention to beginning teachers or teachers whose performances were deemed unsatisfactory, but often the information was given too late to make any difference for that particular school year or was so imprecise that the teacher was not sure about how to proceed.
To make matters worse, the nine criteria we used for teacher evaluation were described in vague statements that did not make clear what we expected from teachers. The criterion for instructional skills, for example, stated, "Uses a variety of teaching strategies to motivate students to learn and to enable them to participate actively and successfully in the class." Many principals found ways to make the best of a flawed process by spending time with the teachers, setting goals, and collecting data for writing the final evaluation. But even those who were able to give useful feedback to teachers were frustrated by the system's lack of specificity. Evaluation merely met the requirement of the law: to make a value judgment about the teacher's performance that included a narrative portion, to declare the teacher either satisfactory or unsatisfactory, and to meet the state's deadlines.

Good Questions, No Good Answers

The widespread dissatisfaction raised questions and helped us clarify what we wanted from our evaluation process. Could we have a teacher evaluation system that would give veteran teachers more autonomy and encourage them to take on such new challenges as National Board for Professional Teaching Standards (NBPTS) certification? Could it encourage more self-monitoring and self-modification by teachers? Could a teacher evaluation process actually provide motivation for continuous improvement? Would it be possible to give low-performing teachers the kinds of specific feedback and assistance that result in real progress, thus retaining individuals who have the potential to develop into strong educators? Would low-performing teachers who need to be guided toward a different profession receive constructive and specific feedback that builds understanding? Aside from meeting the requirements of the law, why are we doing performance evaluation?
Believing that the evaluation process could have a positive effect on the selection, growth, and retention of teachers, our human resources division created a collaborative teacher evaluation task force with members from our teachers' and administrators' associations. In our district, the evaluation process is not part of the negotiated employment agreement, but we knew the end product would be stronger if the natural work group itself crafted the evaluation instrument.
The task force surveyed the teachers and administrators to identify what was not working in the former system. Members of the group gathered samples of teacher evaluation models from around the state and country and read numerous books and articles on teacher evaluation to determine the best of the best practices. The task force eventually identified several models that other districts were using successfully. Each model stated teacher performance standards that were organized into large categories or domains, and each used rubrics to describe different levels of performance. We took the best ideas from these models and proposed a system that is aligned with teacher performance standards (Danielson, 1996).
Our new system has four domains of teaching: planning and preparation, classroom environment, instruction, and professional responsibilities (Danielson, 1996). Each domain identifies components and more specific elements of teacher behaviors. For each element, a rubric describes the teaching behavior as unsatisfactory, target for growth, proficient, or area of strength. The final version of the model that we adopted at the end of the field-test period includes adaptations of the suggestions of field-test participants. For example, we changed the rubric language to better match our district's goals and needs, and we added a component in the instruction domain that focuses on the ability of the teacher to use assessment data in making instructional decisions.

Field Testing

To test the system, we commenced a two-year field test during which all participants gave feedback for refining the model's content and the process. We invited all principals and assistant principals to test the system with volunteer teachers. Hoping to involve at least 10 principals, we were overwhelmed when 60 of 125 administrators decided to try the system the first year. During the second year of the field test, all administrators were required to use the new system with at least five teachers. In all, 1,795 teachers were evaluated as part of the field test.
The field-tested model differed from the previously used evaluation models in some key areas. First, we added an annual goal-setting session by the teacher and the principal. Although a few principals had already been conducting goal-setting sessions, it was not a widespread practice. Adding this step was somewhat risky, because principals had hoped that a new system would be less time-consuming. We believe that teacher participation in goal setting helps teachers become self-reflective practitioners who can adjust their practices when necessary (Costa & Kallick, 2000). Determining whether the time spent on annual goal setting would be worthwhile became an action research question to be answered by the field test.
Second, we wanted to determine how many of the teacher performance standards could or should be assessed each year. The committee believed that novice teachers needed to be monitored, guided, and assisted more during their first two years and that veterans would benefit from a schedule that focused on selected areas in a cyclical rotation.
Novice or probationary teachers receive three written evaluations during their first year on December 1, February 1, and April 1. The evaluating administrator decides, on the basis of the summative evaluation, if the novice needs a second probationary year. Post-probationary teacher evaluations are scheduled on a three-year cycle. During the first year, a major evaluation focuses on two of the teaching domains. During the second and third years, minor evaluations focus on the remaining two domains. Teachers receive one written evaluation each school year on April 1. Teachers who have been post-probationary for five years can choose self-directed growth options during their minor evaluation years. The appropriateness of the proposed rotation schedule was also a question for the field test.
A third key focus of the testing period was expanding the data collection process. Instead of only coming from classroom observations, data also came from parent conferences, individualized education plan (IEP) meetings, such artifacts as parent communication letters, and districtwide activities that teachers participate in regularly. These activities reflect teachers' authentic activities (Danielson, 1996). We hoped to confirm the importance of recognizing the broad scope of the teacher's work that extends beyond the classroom.
In designing the field-testing process, we began to notice the usefulness of a feedback spiral as a framework for our research. A feedback spiral is a visual representation of steps in a process that follow a recursive, cyclical path. Differing from the familiar feedback loop, the steps in the spiral loop lead to continuous improvement rather than back to the starting point. Arthur Costa and Bena Kallick (1995) suggest that the components of a feedback spiral lead individuals—and the organization—toward self-monitoring, self-modification, and self-renewal. The spiral's components—clarify goals and purposes; plan; take action and experiment; assess and gather evidence; study, reflect, and evaluate; modify actions; and revisit and clarify goals and purposes—provided a road map for our action research of the evaluation process we were creating.

Findings and Effects

Surveys of the teachers and administrators who participated in the field test pointed to four main findings:
  • Annual goal-setting sessions helped focus teachers' efforts and helped them make progress.
  • The system increased meaningful dialogue between teacher and evaluator.
  • Using artifacts of some aspects of teaching in addition to classroom observations gave the evaluator a more complete picture of the teacher's performance.
  • The specificity of the rubrics describing the teacher performance standards made the new system a vast improvement over the previous one.
The written comments revealed that the majority of the 675 experienced teachers who responded were revitalized by the reflection the new system encouraged and by the confirmation that their expertise could be stated in descriptive terms rather than in glowing, but vague, generalities. They appreciated the increased control they felt in determining the outcome of their performance ratings and expressed renewed motivation toward personal improvement. Several teachers noted that they felt prepared to pursue National Board certification. The 80 responding novice teachers—whose surveys gave the highest overall ratings—felt secure in knowing what the district expected and what the indicators of success would be.
Some respondents expressed dissatisfaction with certain aspects of the new system. The task force analyzed the data and found that negative comments were less focused on the system's structure or content than on the way it was implemented in schools. For example, some teachers were disappointed with the goal-setting component because it seemed like extra work and was not taken seriously by the principal. Some administrators were dissatisfied that they had not received adequate training in identifying specific teacher behaviors or in using the reporting forms. Principals found that the new system was more productive and efficient during the second year of the test.

Next Steps

When members of the teachers' and administrators' associations presented the field test's positive outcomes, the district's board of trustees unanimously voted to adopt the system, also recommending that the feedback spiral continue. We have begun to pilot the next phase, which allows experienced teachers to select professional growth options, such as action research or mentoring (Danielson & McGreal, 2000). Counselors, school nurses, librarians, and teachers of English as a second language and special education have customized the teacher performance standards to better fit their responsibilities and are now field-testing them. The district has implemented a formal teacher improvement plan for teachers needing assistance. Principals have asked the district to design a similar evaluation system for administrators.
Educators from the other Nevada school districts have watched our process closely. Three districts are currently piloting similar systems. We have also presented an overview of the process to the Nevada school superintendents, all of whom intend to adapt our system to their local districts.
Adopting this new system has brought us to the forefront in fostering teacher effectiveness. Our staff development focus for teachers increases competence and expertise in the specific teaching skills described in the four domains and supports administrators in coaching teachers toward reflection and self-direction.
In a very short time, we changed evaluation in Washoe County from a process that merely met the requirements of the law to a progressive, dynamic process that answers our initial questions with a resounding "Yes!"

References

Costa, A. L., & Kallick, B. (1995). Assessment in the learning organization: Shifting the paradigm. Alexandria, VA: ASCD.

Costa, A. L., & Kallick, B. (2000). Assessing and reporting on habits of mind. Alexandria, VA: ASCD.

Danielson, C. (1996). Enhancing professional practice: A framework for teaching. Alexandria, VA: ASCD.

Danielson, C., & McGreal, T. L. (2000). Teacher evaluation to enhance professional practice. Alexandria, VA: ASCD.

From the issue: Evaluating Educators