I recently sat with a district administrator eager to understand her district's achievement results. Pages of data and statistical breakdowns covered the table. Looking somewhat helpless, she threw up her hands and asked me, “What do I do with all this?”
Many educators could empathize with this administrator. The experts' tendency to complicate the use and analysis of student achievement data often ensures that few educators avail themselves of data's simple, transparent power. The effective use of data depends on simplicity and economy. It begins when educators ask and answer two simple questions:
How many students are succeeding in the subjects I teach?
Within those subjects, what are the areas of strength or weakness?
The answers to these two questions set the stage for targeted, collaborative efforts that can pay immediate dividends in achievement gains.
Focusing Efforts
Answering the first question enables grade-level or subject-area teams of practitioners to establish high-leverage annual improvement goals—for example, moving the percentage of students passing a math or writing assessment from a baseline of 67 percent in 2003 to 72 percent in 2004. Abundant research and school evidence suggest that setting such goals may be the most significant act in the entire school improvement process, greatly increasing the odds of success (Little, 1987; McGonagill, 1992; Rosenholtz, 1991; Schmoker, 1999, 2001).
If we take pains to keep the goals simple and to avoid setting too many of them, they focus the attention and energies of everyone involved (Chang, Labovitz, & Rosansky, 1992; Drucker, 1992; Joyce, Wolf, & Calhoun, 1993). Such goals are quite different from the multiple, vague, ambiguous goal statements that populate many school improvement plans.
Turning Weakness into Strength
After the teacher team has set a goal, it can turn to the next important question: Within the identified subject or course, where do we need to direct our collective attention and expertise? In other words, where do the most students struggle or fail within the larger domains? For example, in English and language arts, students may have scored low in writing essays or in comprehending the main ideas in paragraphs. In mathematics, they may be weak in measurement or in number sense.
Every state or standardized assessment provides data on areas of strength and weakness, at least in certain core subjects. Data from district or school assessments, even gradebooks, can meaningfully supplement the large-scale assessments. After team members identify strengths and weaknesses, they can begin the real work of instructional improvement: the collaborative effort to share, produce, test, and refine lessons and strategies targeted to areas of low performance, where more effective instruction can make the greatest difference for students.
So What's the Problem?
Despite the importance of these two questions, practitioners can rarely answer them. For years, even as data and goals have become educational bywords, I have asked hundreds of teachers whether they know their goals for the current academic year and which of the subjects they teach have the lowest scores. The vast majority of teachers don't know. Even fewer can answer the question: What are the low-scoring areas within a subject or course you teach?
Nor could I. As a middle and high school English teacher, I hadn't the foggiest notion about these data—from state assessments or from my own records. This is the equivalent of a mechanic not knowing which part of the car needs repair.
Why don't most schools provide teachers with data reports that address these two central questions? Perhaps the straightforward improvement scheme described here seems too simple to us, addicted as we are to elaborate, complex programs and plans (Schmoker, 2002; Stigler & Hiebert, 1999).
Over-Analysis and Overload
The most important school improvement processes do not require sophisticated data analysis or special expertise. Teachers themselves can easily learn to conduct the analyses that will have the most significant impact on teaching and achievement.
The extended, district-level analyses and correlational studies some districts conduct can be fascinating stuff; they can even reveal opportunities for improvement. But they can also divert us from the primary purpose of analyzing data: improving instruction to achieve greater student success. Over-analysis can contribute to overload—the propensity to create long, detailed, “comprehensive” improvement plans and documents that few read or remember. Because we gather so much data and because they reveal so many opportunities for improvement, we set too many goals and launch too many initiatives, overtaxing our teachers and our systems (Fullan, 1996; Fullan & Stiegelbauer, 1991).
A simple template for a focused improvement plan, with annual goals for improving students' state assessment scores, would go a long way toward solving the overload problem (Schmoker, 2001). It would enable teams of professional educators to establish their own improvement priorities, simply and quickly, for the students they teach and for those in similar grades, courses, or subject areas.
Using the goals that they have established, teachers can meet regularly to improve their lessons and assess their progress using another important source: formative assessment data. Gathered every few weeks or at each grading period, formative data enable the team to gauge levels of success and to adjust their instructional efforts accordingly. Formative, collectively administered assessments allow teams to capture and celebrate short-term results, which are essential to success in any sphere (Collins, 2001; Kouzes & Posner, 1995; Schaffer, 1988). Even conventional classroom assessment data work for us here, but with a twist. We don't just record these data to assign grades each period; we now look at how many students succeeded on that quiz, that interpretive paragraph, or that applied math assessment, and we ask ourselves why. Teacher teams can now “assess to learn”—to improve their instruction (Stiggins, 2002).
A legion of researchers from education and industry have demonstrated that instructional improvement depends on just such simple, data-driven formats—teams identifying and addressing areas of difficulty and then developing, critiquing, testing, and upgrading efforts in light of ongoing results (Collins, 2001; Darling-Hammond, 1997; DuFour, 2002; Fullan, 2000; Reeves, 2000; Schaffer, 1988; Senge, 1990; Wiggins, 1994). It all starts with the simplest kind of data analysis—with the foundation we have when all teachers know their goals and the specific areas where students most need help.
What About Other Data?
Used in the right measure, other data can aid improvement. For instance, data on achievement differences among socio-economic groups, on students reading below grade level, and on teacher, student, and parent perceptions can all guide improvement.
But data analysis shouldn't result in overload and fragmentation; it shouldn't prevent teams of teachers from setting and knowing their own goals and from staying focused on key areas for improvement. Instead of overloading teachers, let's give them the data they need to conduct powerful, focused analyses and to generate a sustained stream of results for students.