In recent years, assessment data have begun to play a pivotal role in education policy and practice. No Child Left Behind (NCLB) requires states to implement standardized assessment-based systems to evaluate their schools. The NCLB approach rests on the assumption that assessment data can provide credible information to gauge how effectively schools and teachers are serving their students.
Educators, however, recognize that students come to school with different backgrounds, so one-time assessment scores are not a fair basis for comparing teachers who work under vastly different circumstances. We therefore need new methods for evaluating the effectiveness of teachers and schools, ones that differ from the typical NCLB approach.
The Purpose of Value-Added Assessment
Value-added assessment, a statistical process for looking at test score data, is one technique that researchers have been developing to identify effective and ineffective teachers and schools. In contrast to the traditional methods of measuring school effectiveness (including the adequate yearly progress system set up under NCLB), value-added models do not look only at current levels of student achievement. Instead, such models measure each student's improvement from one year to the next by following that student over time to obtain a gain score. The idea behind value-added modeling is to level the playing field by using statistical procedures that allow direct comparisons between schools and teachers—even when those schools are working with quite different populations of students.
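To make the gain-score idea concrete, here is a minimal sketch in Python. The student records are hypothetical, invented purely for illustration; actual value-added systems work with far richer longitudinal data.

```python
# A minimal sketch of the gain-score idea behind value-added models.
# The student records below are hypothetical, invented for illustration.

students = {
    # student_id: (score_year1, score_year2) on a common scale
    "s01": (412, 448),
    "s02": (530, 541),
    "s03": (388, 431),
}

for sid, (y1, y2) in students.items():
    gain = y2 - y1  # improvement from one year to the next
    print(f"{sid}: gain = {gain}")
```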
The end result of value-added assessment is an estimate of teacher quality, referred to as a teacher effect in the value-added literature (Ballou, Sanders, & Wright, 2004). This measure describes how well the teacher performed in improving the achievement of the students in his or her class and how this performance compares with that of other teachers.
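As a rough illustration only, and not one of the formal models in the literature, one could express a teacher effect as the deviation of a teacher's mean classroom gain from the overall mean gain. The classrooms and gains below are hypothetical; operational value-added models rely on far more elaborate statistical machinery, such as mixed models that shrink estimates toward the mean.

```python
# A simplified illustration of a "teacher effect": each teacher's mean
# classroom gain relative to the overall mean gain. The data are
# hypothetical; real value-added models are far more sophisticated.

from statistics import mean

classrooms = {
    "teacher_a": [36, 11, 43],   # per-student gain scores
    "teacher_b": [18, 22, 15],
    "teacher_c": [29, 33, 25],
}

all_gains = [g for gains in classrooms.values() for g in gains]
grand_mean = mean(all_gains)

for teacher, gains in classrooms.items():
    effect = mean(gains) - grand_mean  # positive = above-average growth
    print(f"{teacher}: effect = {effect:+.1f}")
```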
Value-added models have surfaced as an important topic among education policymakers, researchers, and practitioners. U.S. Secretary of Education Margaret Spellings has organized a federal working group to investigate how such models might be incorporated into NCLB. The Government Accountability Office is investigating the integration of these models into state test-based accountability systems. There is also great interest in value-added assessment at the state level, with at least three states—Ohio, Pennsylvania, and Tennessee—using value-added assessment statewide.
The Emerging Research Base
As value-added modeling assumes a larger role in education, its research base is also flourishing. The following three topics in this field are of special interest to educators.
The Complex Statistical Machinery
Ever since the inception of value-added models, educators have expressed concern that such models are too statistically complex and difficult to understand (Darlington, 1997). However, in 2004, a team of researchers at RAND brought a great deal of clarity to the value-added discussion (McCaffrey, Lockwood, Koretz, Louis, & Hamilton, 2004). Their research documented an array of statistical approaches that can be used to analyze assessment data and discussed the benefits and limitations of each model.
Some researchers have compared the results obtained from complex statistical models with those obtained from much simpler models. Tekwe and colleagues (2004) claimed that “there is little or no benefit to using the more complex model” (p. 31). However, their study relied on a narrow data structure, which may have seriously limited its conclusions. Most value-added approaches remain highly technical, and there is little conclusive evidence that simpler designs are just as efficient as more complex designs.
Although the RAND report helped clarify the statistical methods used in value-added models, and value-added software programs are becoming more widely available (Doran & Lockwood, in press), implementing such a model remains complex. For this reason, schools and school districts that are interested in value-added modeling need to collaborate with professional organizations experienced with the challenges of this method.
Test Scores and Vertical Scales
In many areas of scientific research, measuring growth is straightforward. To measure changes in temperature, we need only consult a thermometer. Measuring change in student achievement, however, is not as simple.
For value-added modeling to work, tests must be vertically scaled (Ballou et al., 2004; Doran & Cohen, 2005). Essentially, vertical scaling is a statistical process that connects different tests and places them on the same “ruler,” making it possible to measure growth over time. Just as one cannot track a child's growth by measuring height in inches one year and in meters the next, one cannot compare test scores across years without first converting them to a common scale.
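One simple way to see what vertical scaling does is a mean/sigma linear linking step, in which scores from one grade's test are mapped onto the next grade's scale through a group that took anchor items from both forms. Operational vertical scales are typically built with item response theory, not this shortcut, and the anchor scores below are hypothetical.

```python
# A toy sketch of vertical scaling as linear linking: grade 4 scores are
# mapped onto the grade 5 scale so gains can be read off a single "ruler."
# Anchor statistics below are hypothetical; real scales use IRT methods.

from statistics import mean, pstdev

# Scores of a linking group that took anchor items from both forms.
anchor_g4 = [410, 455, 430, 470, 445]
anchor_g5 = [520, 575, 545, 590, 560]

# Mean/sigma linking: choose slope and intercept so the grade 4 anchor
# distribution matches the grade 5 anchor distribution.
slope = pstdev(anchor_g5) / pstdev(anchor_g4)
intercept = mean(anchor_g5) - slope * mean(anchor_g4)

def to_grade5_scale(score_g4: float) -> float:
    """Place a grade 4 score onto the grade 5 scale."""
    return slope * score_g4 + intercept

# A student's grade 4 score, re-expressed on the grade 5 ruler:
print(round(to_grade5_scale(440), 1))
```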
To connect different tests and measure student growth, designers of value-added models commonly assume that the curriculum in higher grades is nothing more than a harder version of that in the previous grade; in other words, 8th grade math is the same as 7th grade math, just more difficult. Under this assumption, a student's scores from successive grades can be placed on a single scale, and the difference between them can be interpreted as growth in math knowledge.
A large body of research, however, suggests that year-to-year curricular variation is significant (Schmidt, Houang, & McKnight, 2005). Other researchers have demonstrated that the process used to create the vertical scales is a statistical challenge in itself and can actually introduce more error in longitudinal analyses (Doran & Cohen, 2005; Michaelides & Haertel, 2004).
These findings suggest that value-added modeling may need to evolve into newer forms. The research emerging in this area is too new, however, to allow solid conclusions.
Identifying Teacher Effects
Possibly the most important question about value-added assessment is whether the estimate obtained from a value-added model can actually be called a teacher effect. Can any statistical model really sift through all the other factors that may have influenced the student's score (for example, socio-economic status or early learning environment) and isolate the learning that we can specifically attribute to the teacher's methods? As it currently stands, no empirical research validates the claim that value-added models accurately identify the most effective teachers. The many anecdotal claims have not yet been verified through experimental research.
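The following simulation sketches why isolating a teacher effect is so difficult. It regression-adjusts current scores for prior achievement and a crude socioeconomic indicator and then estimates a coefficient for one teacher. The estimate is trustworthy here only because the simulated data contain no confounders that the model omits, a guarantee that real classrooms never provide. All numbers are invented.

```python
# Why isolating a "teacher effect" is hard: after regression-adjusting
# for prior achievement and SES, the teacher coefficient still absorbs
# anything the covariates miss. All data are simulated for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 200

prior = rng.normal(500, 40, n)        # prior-year score
ses = rng.binomial(1, 0.5, n)         # crude SES indicator
teacher_b = rng.binomial(1, 0.5, n)   # 1 if assigned to teacher B

# Simulated current score: depends on prior score, SES, the teacher,
# and unmeasured factors folded into the noise term.
current = 30 + 0.95 * prior + 8 * ses + 5 * teacher_b + rng.normal(0, 10, n)

# Ordinary least squares with an intercept.
X = np.column_stack([np.ones(n), prior, ses, teacher_b])
coef, *_ = np.linalg.lstsq(X, current, rcond=None)

print(f"estimated teacher B effect: {coef[3]:.2f}")
# The estimate recovers the simulated effect only because the simulation
# omits no confounders; real student data offer no such guarantee.
```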
Educators Take Note
The research base on value-added methods is growing, and researchers are developing new approaches in an effort to make this technique more credible and useful to schools. Value-added modeling is an important new area of research—one that is playing a rapidly growing role in shaping assessment and accountability programs.