These days, a fact is not a fact until the media have anointed it, and in the case of education, the media have decided to bless only the negative. Indeed, I have concluded there is a media conspiracy against good news. Consider two similar events and the contrasting coverage they drew.
In February 1992, the U.S. Department of Education joined Educational Testing Service at a well-attended press conference to release a report comparing the mathematics and science performances of 9- and 13-year-olds in various countries (Lapointe et al. 1992a,b). American 9-year-olds finished third in science, but our students ranked low in the other comparisons. The story played big. “An F in World Competition” shrieked Newsweek in a full-page story (1992).
Now consider a second event. In July 1992, the International Association for the Evaluation of Educational Achievement (IEA) released a comparison of reading comprehension that involved 200,000 students (Elley 1992). American 9-year-olds finished second in reading among 31 nations. Our 14-year-olds finished 9th, with scores almost as close to first place as the 9-year-olds'. (On a 600-point scale, identical to that of the SAT, the second-place French students scored only 14 points higher [549] than the American kids [535]. Finnish students came in first at 560.)
You say you didn't hear much about this report? If you blinked between September 29 and 30, when it ran in Education Week and USA Today, you missed it altogether (Rothman 1992, Manning 1992). Archie Lapointe, director of the ETS study that got so much play, told me that “bells should have gone off all over the country when the reading study was released.” Not a chime, nary a tinkle.
The story would likely still be invisible had not Education Week reporter Robert Rothman received a copy of the report from a European friend. USA Today picked up on the Education Week piece but also included a telling comment from then Deputy Assistant Secretary of Education Francie Alexander: “This is OK for the '80s, but for the '90s and beyond, kids are going to have to do better.”
An Aura of Failure
The media are not shy when it comes to spreading bad news about education; bad news sells. An aura of failure has so come to surround the schools that even friends of education misinterpret data. Reporting on the IEA study, The American School Board Journal carried this headline: “Good News: Our 9-Year-Olds Read Well. Bad News: Our 14-Year-Olds Don't” (1992). One can wonder how, among 31 countries where a 16th-place finish would be average, a 9th-place finish could be bad. If we disregard the rankings, which obscure actual performance, we find that American 14-year-olds' scores are nearly as close to first place as the 9-year-olds'.
Even reputable publications fail to see through the pall of failure. On February 10, 1993, the editors of Education Week, assessing the years since A Nation At Risk, penned these words: “The proportion of American youngsters performing at high levels remains infinitesimally small. In the past 10 years, for instance, the number and proportion of those scoring at or above 650 on the verbal and math section of the Scholastic Aptitude Test has actually declined” (“From Risk to Renewal,” 1993).
I've lived with SAT data for a long time, and the numbers presented looked goofy to me. So I checked my College Board Profiles of College-Bound Seniors against Education Week's numbers. For 1982 the numbers checked out, but for 1992 Education Week had counted only those students scoring between 650 and 690, omitting everyone scoring between 700 and 800. The number of high scorers had actually increased, from 29,000 to 32,903 on the verbal section and from 76,000 to 104,401 on the math (The College Board 1992).
Naturally, I fired off a letter to Education Week recommending a highly visible correction. The editors published the letter but no retraction. Under the letter they stated that even if the number of high scorers was up, the proportions were not. I thus had to write a second letter explaining that while the proportion of high scorers on the SAT verbal held constant at 3 percent, the proportion scoring above 650 on the mathematics section rose from 7 percent to 10 percent, a relative increase of more than 40 percent.
I also noted that in normal distributions like the one imposed on the SAT, few people score well. That is a feature built in by the test maker, not a reflection of poor performance. Indeed, even among the northeastern, white, and mostly male students who set the standards on the SAT in 1941, slightly less than 7 percent scored above 650. Growth of more than 40 percent at this upper extreme of a bell-shaped curve is an enormous increase.
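For readers who want to check this, a minimal sketch in Python makes the arithmetic concrete. It assumes the SAT's original scaling, under which the 1941 norm group was anchored to a mean of 500 and a standard deviation of 100; the 7 and 10 percent figures come from the discussion above.

    from math import erfc, sqrt

    # Assumption: the SAT scale was anchored to the 1941 norm group,
    # with a mean of 500 and a standard deviation of 100.
    MEAN, SD = 500, 100

    def share_above(score: float) -> float:
        """Fraction of a normal distribution scoring above `score`."""
        z = (score - MEAN) / SD
        return 0.5 * erfc(z / sqrt(2))

    # About 6.7 percent of the norm group lies above 650: the
    # "slightly less than 7 percent" cited above.
    print(f"share above 650: {share_above(650):.1%}")

    # Taking the quoted proportions at face value, moving from 7 percent
    # to 10 percent of test takers is a relative jump of over 40 percent.
    print(f"relative growth: {(0.10 - 0.07) / 0.07:.0%}")

The thin upper tail, in other words, is built into the scale itself, which is why even a few percentage points of gain there amount to a large relative change.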
No wonder most parents tell pollsters that their own kids' schools are OK but that the nation's schools are in crisis: the media show no interest in good news, and publications that should know better misinterpret data or report inaccurate information.
Trying to Get the News Out
There are problems in education screaming for attention, but there is also good news. Four reports I have published in Phi Delta Kappan amassed mountains of evidence showing the schools performing as well as ever, and better in many respects (Bracey 1991, 1992, 1993, 1994). Given the severe decline in other social institutions, the schools are performing better than we have any reason to expect: Fordham University's Index of Social Health, which combines 17 social indicators into a single number, is at an all-time low (Institute for Innovation in Social Policy 1992). Since the only education indicator in the Index is the improving dropout rate, education is not what is dragging the index down.
Yet the media persist in focusing on the negative. Granted, the first Bracey Report received considerable attention. Washington Post columnist William Raspberry summarized the report (1991), and Education Week (Rothman 1991) and USA Today (Kelly 1991) both accorded my report front-page treatment. Both publications, however, paired it with the independent, corroborating research of Iris Rotberg, a RAND Corporation researcher; Harold Hodgkinson, a demographer at the Institute for Educational Leadership; and engineers at Sandia National Laboratories. Education Week labeled us “revisionists,” and the New York Times, reporting our findings six months later, called us “renegade researchers”: in other words, people not to be taken seriously (Chira 1992).
We renegade researchers analyzing important policy issues got fewer pages of print than a couple of kids caught cheating on the SAT. That weighty topic merited front-page ink in both the New York Times and the Washington Post, two follow-ups in the Post, and an extensive story in People.
My efforts to interest the media in my reports have proven largely fruitless. The few times I have managed to reach reporters, I have been taken aback by their readiness first to dismiss the work, then to argue rather than listen. One receptive reporter said that my reports couldn't be treated as news because there was insufficient time to present both them and the necessary rebuttals. The message was clear: a negative report on education can be trusted on its own; a positive one must be accompanied by rejoinders.
Clinging to the Negative
Many articles that present education's failure as a foregone conclusion are not subject to review. Columnists George F. Will of the Washington Post and Robert J. Samuelson of Newsweek assume that the schools have failed and say so offhandedly in articles about other topics. Newsweek named former Secretary of Education William Bennett to its 100-member “Cultural Elite” solely on the grounds that he was the first person to throw the spotlight on our “dismal” schools.
Nor do the media show much interest in correcting errors. In his column of August 26, 1993, George Will wrote, “Nationally, about half of urban public school teachers with school-age children send them to private schools.” A month later, columnists Jack Anderson and Michael Binstein put the figure at 40 percent of all teachers (1993). Both statistics are utterly false. I tracked down the data and found the figure to be 21 percent for urban teachers and 16 percent for all teachers (Doyle and Hartle 1986). Thus far, no newspaper outside of education has shown any inclination to publish my rebuttal as an article; only the San Francisco Chronicle accepted it, and then merely as a letter to the editor.
I don't take this neglect personally. David Berliner of Arizona State University has had similar problems getting the message out. Berliner debunked 12 myths about American education, including “the United States is an enormous failure in international comparisons of educational achievement” (1993), but an editor at The Atlantic told him that publishing his piece would go against 10 years of Atlantic editorial policy.
Richard Jaeger of the University of North Carolina at Greensboro also concluded that the sky is not falling; compared to other social indicators such as the infant mortality rate and the low birth-weight rate, we do pretty well on test scores (1992). Like Berliner's work, Jaeger's research has been published only in professional journals.
Do education reporters ever read these periodicals? The likely answer is no. George Kaplan trenchantly paints a grim picture of the education beat as lacking in prestige and education reporters as lacking in knowledge and understanding (1992).
My experiences and those of others raise serious questions about the communication of information. Why do some “facts” slip easily into the popular culture while others that contradict them are rejected outright? Consider, for example, the “fact” that only 1 percent of American children perform as well in mathematics as 50 percent of Japanese children. Although incorrect, this “fact” is something that everyone “knows.” Conservative think-tank member Denis Doyle cited this statistic when I debated him. Marc Tucker tossed it into a rebuttal to my comment in a National Public Radio interview. And a few days later, the erroneous statistic turned up in a Washington Post op-ed column by Jessica Matthews (1992).
That only 1 percent of American children perform as well in math as 50 percent of Japanese children is a finding from the fatally flawed research of Harold Stevenson at the University of Michigan (Stevenson et al. 1985). He claims that Asian students consistently outperform American students in mathematics and reading (Stevenson and Stigler 1992), but no one else has obtained similar results. The IEA studies have found that the scores of the upper 5 percent of virtually all countries are almost identical. In some areas, the American 95th percentile is higher than that of countries whose average score is higher than that of U.S. kids (Lapointe et al. 1992a,b).
University of Illinois Professor Ian Westbury's reanalysis of the Second International Mathematics Study found that students in the upper 50 percent of American classes scored at least as well as, and often better than, students in the upper 50 percent of Japanese classes (1993). (The lower half of American students, however, scored poorly.) An earlier analysis found the top 20 percent of American students scoring slightly better than the top 20 percent of Japanese students (1992). While Stevenson's false statistic has popped up everywhere, Westbury's findings have not been mentioned outside the professional literature.
Epilogue
With the appearance of the “Third Bracey Report,” I resumed my role as squeaky wheel for the other side of education's story. From 1992 to 1993, the number of high school seniors scoring above 650 on the SAT verbal rose by 2,249 to 35,152, a 7 percent increase, while the number on the math rose by 5,608 to 110,009, an increase of more than 5 percent and a whopping 11 percent of all test takers. The total number of test takers rose by about 10,000, or 1 percent, so the increase in high scorers far outpaced, relatively, the increase in test takers. My data go back only to 1966, but the 1993 math results are an all-time high over that span.
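As a quick check on that arithmetic, here is a minimal sketch in Python using the College Board counts quoted above (the 1992 baselines are the figures from my earlier exchange with Education Week):

    # 1992 baselines and 1992-to-1993 gains quoted above (College Board data).
    verbal_1992, verbal_gain = 32_903, 2_249
    math_1992, math_gain = 104_401, 5_608

    for label, base, gain in [("verbal", verbal_1992, verbal_gain),
                              ("math", math_1992, math_gain)]:
        print(f"{label}: {base + gain:,} high scorers in 1993, up {gain / base:.0%}")

    # verbal: 35,152 high scorers in 1993, up 7%
    # math:   110,009 high scorers in 1993, up 5%
    # Both far outpace the roughly 1 percent growth in total test takers.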
Responses to my third report were mixed. USA Today not only ignored it but opened its November 9, 1993, editorial by stating, “One report card after another flunks U.S. schools” (“Try Private,” 1993). On the other hand, a long conversation with William Raspberry led to an essay that could hardly have been more positive (1993). As his syndicated feature popped up in papers around the country, so did invitations from local radio talk shows, with, I suspect, doubtful impact. Hosts ranged from the astute to those buried in their own agendas.
The beat goes on. Two years after the reports that opened this article, the Organization for Economic Cooperation and Development released a study comparing 19 nations on many educational variables. The New York Times read the report as showing that U.S. schools “get the job done” (Celis 1993). No other print or electronic medium carried any story about the report. If a story is written in a vacuum, does it make any noise? Well, one story is better than none, but it still leaves me asking the media, “Why isn't this news?” It is a question that, so far, no one has felt like answering.