February 1, 2001 | Vol. 58, No. 5

Using More Data Sources to Evaluate Teachers

More performance measures and teacher choice about which measurements to use can improve the evaluation process for both teachers and administrators.

In Utah's Davis School District, we believe that our community deserves concrete and compelling data about the good teaching and high levels of learning in our schools. We want to be accountable and to compete well for scarce public resources.
We also want the performance data we collect for evaluating our schools and teachers to acknowledge the achievements of our more than 3,000 educators; to assure parents, legislators, and school board members that our teachers are teaching well; and to highlight the district's best practices for emulation by other teachers. We seek to accomplish these goals without bogging down administrators with many hours of supervisory details. By giving our administrators plenty of time to perform the important tasks of instructional leadership, we benefit from their insights into how teachers achieve specific results.
Six years ago, few school districts in the United States had effective systems for the difficult task of districtwide teacher evaluation. We—the Davis district educator assessment committee—decided to improve our previous work on teacher evaluation by creating an innovative, yet conservative, teacher evaluation program. Since then, we have increased the amount and kind of information we collect, enhanced the nature of teacher choice in the evaluation process, and developed an infrastructure of roles and resources to support teacher evaluation (Peterson, 2000). In gathering these data, however, we have adhered closely to what frontline teachers think is a good assessment of their work.

Improved Data Gathering

We use direct measures of teacher performance, preparation, and results. In our system, teachers who are compiling data for the evaluation process can choose among several data sources, including parent surveys, student surveys, student achievement data, documentation of professional activity, and teacher tests, such as the Graduate Record Examination or the Praxis series. Other acceptable data sources include reports by administrators, action-research results, results of school improvement programs, and certification by the National Board for Professional Teaching Standards.
Although subjective expert judgments are an important part of personnel decisions, these decisions should also rely on objective information about teacher performance. In the past, evaluators may have relied on principals' secondhand impressions of such information; now we encourage teachers to call for systematic documentation of performance indicators. For example, we can survey parents directly rather than ask principals for their perceptions of parents' opinions of specific teachers. And we can use the specific measures of student achievement that teachers choose rather than rely on principals' estimates of student achievement.
Another improvement in our data gathering is a more complex infrastructure for supporting the evaluation process. We formed the district educator assessment committee, which advocates good evaluation practices and policies and makes specific recommendations for action. The committee has an equal number of teachers and administrators and reports to the superintendent, the teachers' organization, and the district staff.
The committee's policies are in turn carried out by a district evaluation unit, which helps teachers gather data by preparing, administering, and scoring all survey forms and teacher measures. The administrative leader, clerk, and paid data gatherers of this group save teachers time. For example, teachers do not have to spend hours making, gathering, and interpreting student surveys; the evaluation unit developed a student survey that takes teachers only six minutes to use. The support from the new district infrastructure also saves administrators time and adds credibility to the data. These developments have all been cosponsored by the teachers' organization and the district administration.
In the process of analyzing and studying our program, members of the district educator assessment committee have published some of what we have learned, such as a study of student surveys (Peterson, Wahlquist, & Bone, 2000), and have presented our findings at professional conferences. This study activity helps us refine, improve, and validate our program.

Teacher Choice

Allowing teachers to choose which data to present for evaluation is an important feature of our program. Because some teachers believe that students are not an appropriate source of information or that student achievement data cannot be fairly gathered, we do not mandate either of these kinds of information for every teacher in the district. If we required all teachers to use a single data source, conflict would inevitably arise from even a few unfair applications of the data. After a few misapplications, teachers would begin to resist any use of that data source.
Teachers have two levels of choice and protection from inappropriate application of data in our system. First, individual teachers must call for the data to be gathered. Second, each teacher inspects the results before adding the information to his or her set of evaluation data.
The only non-negotiable requirement is that teachers must present a certain number of data sources at each stage of their evaluation career. For example, beginning teachers must submit a minimum of four data sources per year, whereas veterans use only one to three data sources. We know that good teachers are good for a variety of reasons, and we want the evaluation data to reflect each teacher's individual strengths. Also, a single data source cannot possibly work well for all teachers, and not all data are available for all teachers.
When we ask our teachers to become more involved in their evaluations, we tell them that the approach to evaluation has changed. We say, "Rather than finding out how high you rate on one fixed measure, we want you to document your particular impact, merit, and value. Think about what you accomplish in your teaching, make your claims, and then let us help you document with objective data the fact that you are doing a good job."
We avoid teacher conflict, competition, and justifiable subversion by not cornering teachers who oppose certain data sources. To teachers who seriously object to any single data source—parent surveys, for example—we say, "Fine, don't use them—but do use the data that you are enthusiastic about." This strategy has worked. We have increased the percentage of teachers using student surveys from 18 percent in 1998 to 38 percent in 1999. We have increased the percentage using parent surveys from 18 percent in 1998 to 23.4 percent in 1999. These levels of use contrast sharply with those of districts that use no systematic information. Our level of choice is so high that we even support traditional clinical supervision if either the principal or the individual teacher calls for it.

Reactions from Teachers and Principals

So far, we have enjoyed favorable responses about the evaluation program. One recent survey showed that 84.5 percent of our teachers liked the new techniques. In addition, 82.5 percent said that they enjoyed the increased teacher control. The same percentage indicated that the new system made them more reflective about their teaching. Interviews and focus groups with principals revealed similar high levels of acceptance. In particular, principals liked being released from nonessential formal classroom visits and approved of the opportunity to review specific data about individual teacher performances.

Using Student Achievement Data

Our evaluation system does not focus exclusively on student achievement data. We also use information from the teaching process, including the teacher's materials and documentation of teacher preparation, such as professional development and advanced degrees. Nonetheless, the district highly favors achievement data, especially measurable student gains. We recognize that getting valid and reliable information on achievement gains is difficult. Our strategy for using student achievement data has been to do the best we can and to seek ways to improve the validity, reliability, and coverage in the future.
Our conservative approach was to begin by using student achievement data that individual teachers claimed should be taken into account. Learning from our experience, we are increasing the numbers and techniques for accurately documenting student achievement. At the same time, we are avoiding the problems of wholesale adoption of radical—but unproven—claims for evaluating all achievement according to a narrow criterion.
We permit teachers to gather achievement data in several ways. Grade-level tests, standardized tests, advanced placement examinations, and even pre- and postinstruction achievement tests designed by the teacher for a particular class can all count. The principal, a peer review team, or an external accreditation team of experts judges the adequacy of these strategies.
Teachers are increasing their use of student achievement data as part of their comprehensive evaluation. In 1998, 18 percent of teachers submitted achievement data; in 1999, 42.3 percent did. Our intent is to increase the number of teachers who use these data.
We recognize the promises of such techniques as value-added assessment, in which extensive annual testing of all students in many subject areas promises to pinpoint the contributions of each teacher to each student, but we think too many teachers cannot qualify for consideration. For example, value-added assessment requires three years of student test data, but we need to evaluate primary grade teachers who cannot possibly have such information about their students. And we are not convinced of the value-added assessment claim that three years of general science tests predict how well students will perform in chemistry or physics.
Further, we know that although academic achievement is important, our teachers are expected to foster many more kinds of achievement; we also expect student growth in citizenship, occupational development, and affective qualities like civility and persistence. Finally, we recognize the many instances when achievement occurs but cannot be accurately measured. For example, our art, physical education, and home economics teachers are expected to foster student growth, but we do not have quantitative measures of student achievement in these subjects.

Data Sets or Portfolios?

In our system, teachers gather their materials in data sets that are limited in size. Each data set is a compressed record of three important elements: a teacher's accomplishments, practice, and preparation. A data set is not a portfolio, which we have found to be unwieldy, awkward, and time-consuming. For the small number of teachers who want to create portfolios, we offer a peer review of such materials so that a portfolio can be included in the data set.
One teacher's data set may contain parent surveys, a documentation of professional activity, an administrator's report, and measures of student achievement. Another data set might contain student surveys, the results of an action-research project, a report of participation in a collaborative investigation with other educators, and the teacher's test scores. Teachers have to submit a minimum number of sources, but almost all end up with more than enough data sources to document their successful practice. As we intended, teachers show a variety of patterns of success and many different kinds of accomplishments, practices, and preparations.

Who Makes the Decisions?

We still rely on the judgment of the building principal to evaluate our teachers. We, however, have placed more extensive, accurate, objective, teacher-controlled data on the table for the principal's evaluation.
We continue to face the challenge of getting more and better evaluation data for the majority of our teachers, those who are doing wonderful jobs. We also use a comprehensive process for the separate, uncommon problem of marginal performance.
A teacher's final annual appraisal is a two-page form filled out by the building principal. The form first indicates where the principal has received data about the teacher, including the data sets and possibly informal visits or conversations with the teacher. Next, the form lists 14 performance areas. Principals use their expertise and the data sets to grade the teacher's performance in each area as "well functioning," "needs attention," or "unsatisfactory." The final sections give room for recommendations, which are required in the case of "needs attention" or "unsatisfactory" ratings.
In our district's previous evaluation models, which relied on classroom visits and clinical supervision, critical remarks about teachers were rare unless the problems were exceptionally severe. With the new system, the occurrence of "needs attention" ratings has gradually increased to about 2.5 percent, approximately half of the national average (Peterson, 2000). Even at these low levels, the new procedures appear to be an effective means of monitoring teachers and maintaining accountability.

A Changed Role for Principals

These improvements in our data gathering have changed our expectations of principals. Freed from extra data-gathering duties, they have more time for monitoring. The limited payoffs of formal classroom visits and clinical supervision have been replaced by more frequent walkthroughs (69 percent in 1998, 83.9 percent in 1999) and greater attention to beginning and marginal teachers. We want our principals to do what they do best: be instructional leaders in their buildings.
We are proud of our accomplishments. Our teachers take tests as part of their summative evaluation, whereas teachers in other districts do not. We survey parents and students, whereas collecting such data has become a controversial topic in many parts of the country. We are working hard to document student achievement and to avoid the rush to unproven promises of evaluating all teachers on the basis of student achievement test scores. Finally, we recognize that serious teacher evaluation is not cheap. We continue to study and control our costs for this substantially increased data gathering.

Peterson, K. D. (2000). Teacher evaluation: A comprehensive guide to new directions and practices. Thousand Oaks, CA: Corwin Press.

Peterson, K. D., Wahlquist, C., & Bone, K. (2000). Student surveys for school teacher evaluation. Journal of Personnel Evaluation in Education, 14(2), 135–153.

Kenneth D. Peterson is Professor of Education at Portland State University, Oregon. He previously taught at the University of California, Berkeley, and at the University of Utah. A former classroom teacher, he has been teaching for 30 years, serves on the editorial review board of the Journal of Personnel Evaluation in Education, and is president of the Consortium for Research in Educational Accountability and Teacher Evaluation (CREATE).

From the issue: Evaluating Educators