November 1, 2003 | Vol. 61, No. 3

Learning from Student Assessment Results

How can schools use assessment data to improve student achievement?

In mid-March, Harrison High School's instructional leadership team took its first look at scores on the district-mandated writing samples that students had completed the previous October and January. The purpose of the writing sample assessments was to help teachers gauge their students' readiness to succeed on the writing portion of the state-mandated Massachusetts Comprehensive Assessment System (MCAS) exams. As part of an ongoing project, we had helped the school in its efforts to get the data in shape, a time-consuming task that included updating student rosters, entering scores, and matching students to teachers. We were proud of the results of our hard work.
Yet many of Harrison High's teachers did not share our enthusiasm. As the team studied the scores, we heard, “I don't think that teachers used the rubric to grade these,” “The rubric was never really clear to teachers,” and “So what does this all mean?” Such comments made it clear that complying with the well-intended district mandate and providing data to teachers were not enough. The school needed to go further and directly address the obstacles that might prevent teachers from using student assessment results effectively to improve instruction.
The No Child Left Behind Act dramatically increases the importance of learning from student assessment results. The legislation requires educators to disaggregate achievement data, track the achievement of all students over time, and show demonstrable progress in raising the percentage of students who are proficient in math and reading. School staffs will need to examine assessment results to identify problems and plan appropriate instructional interventions (Houlihan, 2002; Mason, 2002). Implicitly, the legislation assumes that teachers and administrators know how to learn from student assessment results and that they have the time and support that they need to do so. Our work in public schools, however, leads us to question these assumptions.
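To make the disaggregation requirement concrete, the following Python sketch (using the pandas library) shows one way an analyst might compute the percentage of proficient students by subgroup and by year. The table, column names, and performance levels are invented for illustration; they do not describe any actual district data system.

import pandas as pd

# Hypothetical student-level results; real data would come from the district database.
scores = pd.DataFrame({
    "student_id": [1, 2, 3, 4, 5, 6],
    "year":       [2002, 2002, 2002, 2003, 2003, 2003],
    "subgroup":   ["ELL", "ELL", "General", "ELL", "General", "General"],
    "math_level": ["Needs Improvement", "Proficient", "Proficient",
                   "Proficient", "Advanced", "Proficient"],
})

# Treat "Proficient" and "Advanced" as meeting the benchmark.
scores["proficient"] = scores["math_level"].isin(["Proficient", "Advanced"])

# Percent proficient for each subgroup in each year: the trend schools must report.
trend = (scores.groupby(["year", "subgroup"])["proficient"]
               .mean()
               .mul(100)
               .round(1)
               .rename("pct_proficient"))
print(trend)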
For the past two years, we have worked to help teachers and administrators in Boston Public Schools learn from student assessment results. As part of this ongoing project, we worked intensively with the staff of Harrison High School to collect, analyze, interpret, and disseminate student assessment results. We also interviewed teachers and administrators at five other high schools, including two small pilot schools.
On the basis of our interviews and work with school staffs, we have identified three types of challenges that schools face when they attempt to use assessment results to improve instruction. In each of these areas—technology, knowledge, and opportunity—school leaders can take steps to build the capacity of their staffs to make the best use of assessment data.

Technology Challenges

Because time is the scarcest resource in most schools, teachers and administrators will learn from student assessment results only if they can obtain useful data quickly. Therefore, they need access to user-friendly software that provides intuitive graphic summaries of assessment results for specific groups of students. Student mobility and the fluidity of school organization make providing such data difficult.
As many as 40 percent of the students in Boston high schools in May are different from those attending the same schools the previous September. As a consequence, student databases must be frequently updated to reflect student mobility—a time-consuming task, especially if individual schools handle this work.
Typically, storing student assessment results on a districtwide database reduces the cost of keeping records up to date. Districtwide databases, however, do not usually provide information on instructionally relevant groups of students in individual schools. For example, the headmaster of a Boston high school reported that she would like to be able to examine test scores for students participating in MCAS review sessions: "Who are these students? Are they concentrated in any specific classes with any specific teachers? Are they the bilingual students? If we could answer these questions quickly, we might be able to use the data to learn about what we're doing and not doing. Without that information, though, it's much harder."
The most useful district database stores the data centrally but enables teachers and administrators to look at test results for specific groups of students in their school, such as those participating in tutoring programs or review sessions. An effective data system also makes it easy for teachers to look at results on end-of-year summative assessments, not only for their current students, but also for the students whom they taught the previous year.
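The following hypothetical Python sketch illustrates the two kinds of lookups this paragraph describes: scores for a school-defined group such as MCAS review-session participants, and this year's scores for the students a teacher taught last year. All table and column names are invented; a real district system would organize these data differently.

import pandas as pd

results = pd.DataFrame({          # one row per student per test administration
    "student_id": [101, 102, 103, 104],
    "year":       [2003, 2003, 2003, 2003],
    "ela_score":  [218, 236, 229, 241],
})
groups = pd.DataFrame({           # school-defined, instructionally relevant groups
    "student_id": [101, 103],
    "group":      ["mcas_review", "mcas_review"],
})
rosters = pd.DataFrame({          # teacher-student links by year
    "student_id": [101, 102, 103, 104],
    "teacher":    ["Rivera", "Rivera", "Chan", "Chan"],
    "year":       [2002, 2002, 2002, 2002],
})

# 1. Scores for students participating in the MCAS review sessions.
review = results.merge(groups.query("group == 'mcas_review'"), on="student_id")
print(review[["student_id", "ela_score"]])

# 2. This year's scores for the students Ms. Rivera taught the previous year.
last_years_students = rosters.query("teacher == 'Rivera' and year == 2002")["student_id"]
print(results[results["student_id"].isin(last_years_students)])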
Another technology-related issue concerns the tension between access and privacy. Some teachers want to compare the assessment results of their students with those of other teachers, whereas some teachers resist such comparisons. Many administrators worry about liability in giving teachers access to potentially sensitive information on students whom they do not teach. Boston's strategy for resolving this tension is twofold. First, in response to teachers' advice, the next version of the district's software will enable teachers to compare the MCAS performances of their students with the averages for students in the district and state. Second, the software enables school principals to give designated members of the school's data team full access to school assessment results.
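As a rough illustration of that twofold strategy, the hypothetical Python function below scopes what a user can see: data-team members designated by the principal get full school results, while other teachers see only their own students. This is an invented sketch, not a description of Boston's actual software.

import pandas as pd

def visible_results(results, rosters, user, data_team):
    # Data-team members designated by the principal see all school results.
    if user in data_team:
        return results
    # Other teachers see only the results of the students they teach.
    own_students = rosters.loc[rosters["teacher"] == user, "student_id"]
    return results[results["student_id"].isin(own_students)]

# Hypothetical usage: a teacher's view versus a data-team member's view.
results = pd.DataFrame({"student_id": [101, 102, 103], "mcas_ela": [228, 236, 241]})
rosters = pd.DataFrame({"student_id": [101, 102, 103],
                        "teacher":    ["Rivera", "Rivera", "Chan"]})
print(visible_results(results, rosters, user="Rivera", data_team={"Lee"}))
print(visible_results(results, rosters, user="Lee", data_team={"Lee"}))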

Knowledge Challenges

Training that schools provide for their teachers and administrators in using data tends to focus on how to use software. Although this training is necessary, our experience has demonstrated that teachers and administrators also need to learn a more difficult set of skills: how to ask instructionally relevant questions of data and how to answer such questions. Most educators have not learned these skills in pre-service training.
This challenge has a “chicken and egg” aspect. Teachers find it difficult to know what patterns of assessment results to ask for without a clear sense of what questions they want to answer. At the same time, they find it difficult to formulate instructionally relevant questions without knowledge of the types of information they could obtain.
For example, a common pattern in MCAS results is that students tend to skip math questions requiring open-ended responses. Does this mean that the students did not understand the questions? That they could not do the math required? Or that they did not know how to write about their responses? To find the answer, teachers must not only think creatively about what patterns of test responses might throw light on why students did not respond to open-ended math questions but also recognize that answering the question may require data obtained by different methods—by interviewing students, for example.
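A first step in such an inquiry might look like the hypothetical Python sketch below: comparing how often students leave open-response items blank with how often they leave multiple-choice items blank. The item-level data here are invented; a real analysis would start from the district's actual response files.

import pandas as pd

item_responses = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 3, 3],
    "item_type":  ["multiple_choice", "open_response"] * 3,
    "response":   ["B", None, "C", "work shown...", "A", None],  # None = item left blank
})

skip_rates = (item_responses.assign(skipped=item_responses["response"].isna())
                            .groupby("item_type")["skipped"]
                            .mean()
                            .mul(100)
                            .round(1))
# A much higher skip rate for open-response items would prompt the follow-up
# questions in the text: comprehension, math skill, or writing about math?
print(skip_rates)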
We found enormous variation in Boston school staffs' ability to learn from student assessment data. Some schools analyzed scores solely to comply with accountability requirements, such as reporting by ethnic group the percentage of students who met performance benchmarks. These schools did not see the results as providing information useful for school improvement. Other Boston schools, however, analyzed student assessment results to answer crucial questions. For example, one school examined whether students who had math earlier in the day performed better or worse than students who had math later in the day; the results led the school to alter its master schedule. One of Boston's strategies for tackling the knowledge challenge is to offer workshops in which teams from different schools describe their data-related work and receive feedback.
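The time-of-day question is the kind of analysis a school data team could run in a few lines once the data are in hand. The Python sketch below uses invented schedule and score data; a real analysis would also need to consider how students are assigned to periods.

import pandas as pd

students = pd.DataFrame({
    "student_id":  [1, 2, 3, 4, 5, 6],
    "math_period": [1, 2, 6, 7, 1, 7],      # period in which the student takes math
    "math_score":  [242, 238, 226, 221, 245, 230],
})

# Label each student's math class as morning or afternoon, then compare averages.
students["time_of_day"] = students["math_period"].apply(
    lambda p: "morning" if p <= 3 else "afternoon")
print(students.groupby("time_of_day")["math_score"].agg(["mean", "count"]))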

Opportunity Challenges

School staffs typically have little time during the school day to examine student assessment results. Although some Boston schools have arranged their schedule to provide common meeting times for teachers, this arrangement does not always guarantee sufficient time to discuss student data. Meeting times are often taken up by such matters as discipline and scheduling.
Even when schools do find time for teachers to work together on examining student assessment results, teachers' feelings of vulnerability can jeopardize the activity. Comparisons of results are inevitable and potentially threatening. Teachers at one school initially held workshops about using data but then decided to work individually on looking at their own students' assessment results. The school administrator commented, "There was this whole fear that it was going to be used for evaluation . . . and teachers weren't too willing to volunteer to bring their [student assessment results] to the table because they didn't want to be put in the spotlight by their colleagues. You know, 'My work is my work; don't anybody look at it; I'm going to shut my door.' It was so brand-new."
Unfortunately, when teachers work by themselves to make sense of student assessment results, what they learn is unlikely to contribute to the creation of a coherent instructional program (Newmann, Smith, Allensworth, & Bryk, 2001). In our work with Boston schools, we have found that using such methods as the tuning protocol (Allen & McDonald, 2003) and question formulation techniques (Right Question Project, 2000) to structure the process of examining student assessment results helps in gaining teachers' support.
Recommendations for Learning from Student Assessment Results

Overcoming Technology Obstacles

  • Store student assessment data on a districtwide database. Update it frequently to reflect student mobility.

  • Enable school staffs to define instructionally relevant groups of students in the database and to examine assessment results for these groups.

  • Give teachers and administrators software that is easy to use and that addresses the questions that teachers frequently ask, both about the performances of their current students relative to benchmarks and about the performances of the students they taught in the previous year.

  • Design the database system so that school principals can use discretion in resolving the tension between access and privacy.

Overcoming Knowledge Obstacles

  • Prepare teachers and administrators to use the software and to frame and address instructionally relevant questions.

  • Invite teams from schools with expertise in analyzing assessment results to demonstrate to educators from other schools what they have done and how it contributed to instructional improvement.

Overcoming Opportunity Obstacles

  • Build time into the schedule for examining student assessment results.

  • Provide processes for teachers to talk about their students' work (for example, tuning protocols and question-formulation techniques).

 

The Special Case of Formative Assessments

Formative assessments can provide timely information on students' mastery of specific skills and the effectiveness of instructional interventions, contributing to effective school improvement strategies (Black & Wiliam, 1998). In addition to the obstacles to learning from student assessment results discussed above, however, formative assessments pose particular challenges.
Scoring presents one such challenge. For example, the district central office in Boston requires high school staffs to administer three open-ended writing prompts each school year. To share the scoring burden, the administration at one district high school mandated that all teachers would participate in the scoring. The English teachers, however, did not believe that their colleagues in other departments were grading the writing prompts accurately and, consequently, did not think the results were worth analyzing. As one English teacher commented, "I don't think that all the teachers in the other departments really used the rubric for grading these. . . . Even the kids know that this grading isn't really working and isn't really accurate."
Of course, more teacher training might contribute to more reliable scoring, but schools may not want to add yet another time-consuming activity.
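One low-cost way to check such concerns about scoring consistency is to have two teachers score the same small set of papers and measure how often their rubric scores agree. The Python sketch below uses invented scores to illustrate the calculation; it is not drawn from the school described above.

import pandas as pd

double_scored = pd.DataFrame({
    "essay_id":        [1, 2, 3, 4, 5, 6, 7, 8],
    "english_teacher": [3, 4, 2, 3, 4, 1, 3, 2],   # rubric scores on a 1-4 scale
    "other_dept":      [3, 3, 3, 4, 4, 2, 3, 3],
})

# Share of essays on which the two raters agree exactly, and within one point.
exact = (double_scored["english_teacher"] == double_scored["other_dept"]).mean()
within_one = (double_scored["english_teacher"]
              .sub(double_scored["other_dept"]).abs().le(1)).mean()
print(f"Exact agreement: {exact:.0%}, within one point: {within_one:.0%}")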
Another challenge arises from the potential differences between the state-developed summative assessments and the district-mandated, school-designed formative assessments, especially if the school's curriculum focus is not well aligned with the state curriculum standards. At one Boston high school, for example, teachers believed that they had made great strides in student reading and writing but that the MCAS exam questions—one focusing on special effects in the movie Titanic and one focusing on stage directions in Arthur Miller's Death of a Salesman—did not give students the chance to demonstrate their reading and writing capabilities.
The question of whether central office staff or educators in individual schools should design formative assessments often reflects a tension implicit in many current education reforms: the desire for centrally monitored and supported accountability systems versus the push toward greater school decentralization, independence, and teacher professionalism and collegiality. During the 2002–2003 school year, Boston maintained an uneasy balance: All schools were required to administer writing prompts three times a year, but individual schools could choose their prompts and their scoring rubrics. This approach had the potential to empower schools to make formative assessments part of their school improvement strategies. But the variation in prompts and scoring rubrics also dramatically increased the challenge of providing schools with electronic support for analyzing scores on writing prompts.

Improving the Quality of Education

For many U.S. schools, complying with the requirements of No Child Left Behind will increase the amount of time devoted to testing and decrease the time available for instruction. If these schools do not make constructive use of their test results, the net effect of the legislation will likely be a reduction in student learning.
The increased emphasis on student testing will improve the quality of education only if educators use student assessment results effectively. If we want all schools to use test results to inform meaningful schoolwide instructional improvement, we must find ways to overcome the technology, knowledge, and opportunity challenges inherent in this work.
References

Allen, D., & McDonald, J. (2003). The tuning protocol: A process for reflection on teacher and student work [Online]. Available: www.essentialschools.org/cs/resources/view/ces_res/54

Black, P., & Wiliam, D. (1998). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan. [Online]. Available: www.pdkintl.org/kappan/kbla9810.htm

Houlihan, T. (2002, July). Reporting for No Child Left Behind: What's the impact on education? Opening remarks at Stats-DC 2002 NCES Forum and Summer Data Conference, Common Data Common Goals, Washington, DC.

Mason, S. A. (2002, July). Data for learning: The role of data in professional learning communities. Unpublished AERA conference proposal.

Newmann, F. M., Smith, B., Allensworth, E., & Bryk, A. (2001). Instructional program coherence: What it is and why it should guide school improvement policy. Educational Evaluation and Policy Analysis, 23(4), 297–321.

Right Question Project. (2000). The Right Question Project education strategy [Online]. Available: www.rightquestion.org/edstrat.html

End Notes

1 The school's name is a pseudonym.

2 Formative assessments are those used by teachers to adjust their instruction to meet student needs throughout the school year. Summative assessments are typically given at the end of a course of instruction.

Richard J. Murnane has been a contributor to Educational Leadership.
