Sometimes, teachers need help making the connections among assessment, data, and instruction.
Like "Tinker to Evers to Chance," the double play in baseball that inspired poetry and delighted Cubs fans, some things that look easy are actually the result of hard work and thoughtful orchestration. Such is the case with moving from formative assessment to data analysis to effective literacy instruction.
The main goal of formative assessment is to inform teachers of the active knowledge, skills, and strategies their students have mastered and to point to instruction that will move students farther along the path to learning. Yet the process of translating assessment into instructional decision making is far from easy.
As coaches and literacy educators, we have closely examined how literacy coaching can help teachers learn how to use assessment data to make instructional decisions. Two districts that we have worked with illustrate this process (Blachowicz, Buhle, Frost, & Bates, 2007). In both cases, educators used data from the Illinois Snapshots of Early Literacy (ISEL), a standardized literacy assessment that provides a comprehensive overview of a child's competencies in a variety of literacy skills needed for success in reading. And in both cases, teachers initially could not make effective use of ISEL data because they failed to make connections between what the data told them and what they knew about instruction.
Using Beginning-of-the-Year Data to Develop Curriculum
The term silo communication describes an organizational environment in which people or groups do not communicate with other people or groups within the organization. Instead, each person or department tends to operate as a separate entity, frequently making decisions that do not take other aspects of the organization into consideration.
An even more complex type of silo communication is the lack of communication within a person's or group's own thinking: the tendency not to connect one known body of information with another. We see teachers thinking in this fashion when they appear to disregard assessment results as they make instructional decisions.
Our coaching work suggests that instructional improvement requires more than just presenting the data and expecting them to automatically transform teachers' thinking. Rather, teachers may need sensitive coaching and facilitation to study their data and to engage in the kind of problem solving and root-cause analysis of student progress that helps them build bridges between data and instructional decision making (Bernhardt, 2004; Glickman, 2002).
We observed such coaching in one large, high-performing district where the Illinois Snapshots of Early Literacy was administered to a carefully randomized group of incoming kindergarten students. The district's literacy director presented the results of the assessment to a committee of kindergarten teachers who were charged with using the data to develop and pilot a new literacy curriculum for their grade level. As they viewed the graph shown in Figure 1, the teachers did not seem surprised that their students' average scores either met or exceeded the assessment's targets. They agreed that the summary seemed to fairly represent the majority of their students, although they also agreed that each school had at least one group of students with lower scores.
Figure 1. Kindergartners' Average Percentage Correct on the Fall ISEL-K/1 in One Illinois School District
After the literacy director had facilitated an extensive and rich discussion of the data, the kindergarten teachers immediately began to develop their plans for literacy activities they would include in their pilot curriculum. Maria, an experienced kindergarten teacher known for her students' high end-of-year literacy outcomes, proposed a whole-group, letter-of-the-week curriculum that included extensive work in letter recognition for all students. She suggested that more complex activities, such as emergent writing, should not be phased in until later in the semester. Several people on the committee agreed with Maria's suggestions; no alternative plan was discussed.
We were perplexed by this discussion. Maria and her group had just seen and discussed the ISEL assessment information, which clearly suggested that extensive whole-group activities in letter recognition were not necessary for many of their incoming students. On the contrary, the students' average and above-average scores in ABC recognition, letter sounds, and phonemic awareness suggested that many beginning kindergarten students in this district could respond positively to earlier opportunities to practice emergent writing.
Further, because of previous professional development activities, the teachers believed that emergent writing was an excellent way to teach multiple literacy skills, such as letter sounds, phonemic awareness, and word recognition. Why weren't the teachers combining that knowledge with the test data they had just discussed?
It was as though the committee's discussion of new kindergartners' high letter-recognition scores on the ISEL was situated in one silo of the teachers' thinking, whereas Maria's extensive internal menu of literacy activities and rich literacy curriculum knowledge were situated in another silo. Neither silo communicated with or informed the other.
How do literacy support staff and school administrators help well-meaning, well-informed teachers like Maria and her colleagues build bridges between assessment and instruction, two key areas of related knowledge? In this case, the literacy coach chose to prompt the kindergarten teachers with questions like, "Do any of these ISEL scores suggest a different kind of literacy opportunity that you could provide for your students at the beginning of the year?" "Is there something you could do sooner in the year, given this assessment information?" and, pointedly, "What does the district average in ABC recognition tell you about your students?"
At this point, an important issue surfaced. Teachers explained that many other parts of their kindergarten curriculum were tied to a letter-of-the-week curriculum, including science and math activities and even free choice options in the students' play areas. Clearly, changing the letter-of-the-week curriculum could have a domino effect, disrupting other aspects of a well-established and successful curriculum.
Respecting how difficult this would be, the literacy coach asked whether the teachers could add a small writing activity at the beginning of the year to see how students responded. The teachers agreed, and they later reported how impressed they were with the sophistication of the students' early writing attempts. Over time, the teachers cut back on their extensive, whole-group ABC recognition and letter-sound instruction, using the time to give all students earlier opportunities to write and providing intensive letter instruction only to those students whose ISEL scores indicated that they needed it. In this case, the literacy coach had negotiated respectfully and sensitively to construct a bridge between the district's test data and the kindergarten teachers' instructional expertise.
Using Year-End Data to Evaluate Instruction
In another district, a literacy coach worked with a single kindergarten teacher to reflect on her class's literacy growth, comparing her students' average spring scores on the Illinois Snapshots with the achievement targets set by the ISEL, as shown in Figure 2. The coach and teacher hoped to use these data to inform decisions about the following year's curriculum and instruction.
Figure 2. Kindergartners' Average Percentage Correct on the Spring ISEL-K/1 in One Illinois Classroom
The literacy coach first prompted the teacher to acknowledge the progress her students had made toward the target scores in three areas (ABC recognition, letter sounds, and word recognition), noting that they had even exceeded the target in two of them. Through additional questions ("Where are the students doing the best?" "What does this subtest measure?" "What doesn't it measure?"), the literacy coach also helped the teacher see that the areas in which students had reached or exceeded the targets were all measures of individual item knowledge. The areas in which students had fallen short of the targets required them to orchestrate multiple items and perform more complex tasks, such as spelling, matching words in a sentence, and even trying to read a simple book.
Unlike the teachers in the first district, the teacher in this school had only recently been provided with rich, capacity-building professional development. The coach needed to provide more than questions; she needed to explain the difference between item knowledge (such as letter recognition) and more complex forms of literacy knowledge.
As in the earlier example, the coach's focus was to build bridges, but this time the necessary link was between the teacher's knowledge of her students' assessment scores and her growing understanding of how literacy works.
The coach offered, "Let me show you how the kind of instruction that helps students orchestrate complex literacy skills might look. We can use a big book or a chart story to talk about it." This discussion encouraged the teacher to think about decreasing the amount of time she was devoting to word decoding and increasing the time she devoted to guided reading and independent reading and writing.
The Role of the Coach
Even if districts use assessment instruments like the ISEL, which is closely tied to classroom practice, it is naïve to believe that teachers will use assessment data to inform instruction without the coaching and support they need to begin the process. Teachers in the two districts discussed here had two separate silos of knowledge for their assessment data and their curriculum and instructional plans. Fortunately, both districts provided coaches who helped teachers begin the conversation that could build bridges between data and instruction. Such conversations might begin with questions like these:
• How are we doing?
• What are we doing best?
• What do these assessments measure?
• What are we missing?
• Is this the best we can do?
• Where should we place more emphasis?
• What do we already do that we can do more of?
• Is there something we can do sooner in the year?
• Where can we place less emphasis?
Such discussions must always center on student performance and work. Everyone, coach and teacher alike, shares the same desire: to help students become the best readers they can be. Like Tinker, Evers, and Chance, teachers and coaches engage in difficult, thoughtful, and focused work. The result is truly formative assessment.
References
• Bernhardt, V. L. (2004). Data analysis for continuous school improvement. Larchmont, NY: Eye on Education.
• Blachowicz, C. L. Z., Buhle, R., Frost, S., & Bates, A. (2007). Formative uses of assessment: Cases from the primary grades. In J. Paratore & R. McCormack (Eds.), Classroom literacy assessment (pp. 246–261). New York: Guilford.
• Glickman, C. D. (2002). Leadership for learning. Alexandria, VA: ASCD.
Endnotes
1. More information about the Illinois Snapshots of Early Literacy is available from the Illinois State Board of Education Web site at www.isbe.net/curriculum/reading/html/isel.htm. The Illinois Snapshots of Early Literacy and its manuals can be downloaded from the Reading Center Web site at National-Louis University, www2.nl.edu/reading_center.