Although the No Child Left Behind legislation makes it virtually impossible for schools to avoid thinking about how to measure reading comprehension, few educators or policymakers have considered how Internet technologies affect conventional thinking about reading assessment. Even fewer have tackled the issue of how schools might reliably measure the new skills required to comprehend online text.
Over the last seven years, as a member of the New Literacies Research Team (see www.newliteracies.uconn.edu/team.html), I have analyzed recordings of hundreds of adolescents engaged in reading for information on the Internet. Preliminary evidence from these analyses reveals that reading comprehension on the Internet differs from traditional reading comprehension in at least five important ways. Let's examine these differences and consider how teachers might expand their range of reading assessment practices to capture the skills and strategies students need to comprehend information in the digital age.

Difference 1. Students need new skills.
A typical book-based reading assignment asks students to read a common text, answer questions about the main ideas, and respond to these ideas through writing, art, or class discussion. In contrast, a typical Internet-based reading assignment requires students to generate appropriate search requests, sift through disparate sources to locate their own texts, synthesize the most reliable and relevant information within those texts, and respond with online communication tools such as an e-mail message or blog post. Sifting through a vast field of information to find the best sources becomes integral to the reading task.
To complete online reading assignments well, students need new skills beyond those currently measured by standardized tests of offline reading comprehension (Coiro, 2007). In addition to using conventional knowledge of vocabulary and informational text structures, skilled online readers can efficiently use search engines, navigate multilayered Web sites, and monitor the appropriateness of their pathway through a complex network of connected text (Coiro & Dobler, 2007). Moreover, scores on some online reading tasks correlate only weakly with scores on standardized tests of traditional reading comprehension skills (see Leu et al., 2008). Cases are emerging in which a high-achieving offline reader appears to be a low-achieving online reader and vice versa. In other words, we can no longer assume that a standardized assessment of a student's offline reading comprehension ability will adequately measure important skills that influence online reading performance.
So, how might teachers determine which students are proficient in online reading and which students require more support? One suggestion is to incorporate curriculum-based measures of online reading ability into classroom assessment practices. These measures, called online reading comprehension assessments (or ORCAs), are more than compilations of static reading passages and multiple-choice questions transferred into a Web-based environment. A curriculum-based ORCA is designed to capture "real-time" online reading skills and strategies.
My colleagues and I piloted a series of six ORCAs with hundreds of U.S. 7th graders in language arts and science classrooms. We found that these assessments yielded valid, reliable scores of online reading comprehension performance (Coiro, Castek, Henry, & Malloy, 2007).
What does a curriculum-based online reading comprehension assessment look like? Generally, an ORCA engages individual students in a series of three to four related information requests posted in an online quiz interface. Students toggle between the online quiz and the open Internet, where they locate, critically evaluate, and synthesize the requested information or share ideas using such tools as e-mail, blogs, or wikis.
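For teachers (or colleagues in the technology department) who want to prototype their own ORCA, the structure is simple enough to sketch in a few lines of Python. The field names and skill labels below are my own invented shorthand for the locate, evaluate, synthesize, and communicate tasks described above, not part of any published ORCA format:

```python
from dataclasses import dataclass, field

# Hypothetical shorthand for one ORCA information request; names are invented.
@dataclass
class OrcaTask:
    prompt: str          # the information request posted in the quiz interface
    skill: str           # "locate", "evaluate", "synthesize", or "communicate"
    response: str = ""   # the student's answer, composed on the open Internet
    source_urls: list[str] = field(default_factory=list)  # where it was found

@dataclass
class Orca:
    unit: str                                        # the curriculum-based unit
    tasks: list[OrcaTask] = field(default_factory=list)

# Three related requests, mirroring the structure described above.
iditarod = Orca(unit="Iditarod sled dog races", tasks=[
    OrcaTask("Locate the record time for the Iditarod and who set it.", "locate"),
    OrcaTask("Explain how you know the information is accurate.", "evaluate"),
    OrcaTask("E-mail your findings and their Web addresses to a classmate.",
             "communicate"),
])
```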
Figure 1 shows a series of online reading tasks integrated into "ORCA-Iditarod," an online reading comprehension assessment that might be used during a middle-school unit on the Iditarod sled dog races held in Alaska. You can explore the online version of this assessment at www.surveymonkey.com/s.aspx?sm=nA_2bGnBO2W8OlNmaQQWo0sA_3d_3d.

Figure 1. Screenshot of Directions for ORCA-Iditarod
Teachers can use software like Camtasia (www.techsmith.com/camtasia.asp) or iShowU (http://store.shinywhitebox.com/home/home.html) to create a video recording of students' actions and voices while they complete the ORCA, just as if the teacher were watching over their shoulders. An online tool such as Quia (www.quia.com) or Survey Monkey (www.surveymonkey.com) can automatically compile student responses. Teachers can play back the video recordings to better understand how students accomplish or struggle with online reading comprehension tasks.

To develop this kind of assessment, teachers should begin with a curriculum-based unit of study, such as homelessness or human body systems. Construct short challenges within the online quiz interface that direct students to locate, evaluate, synthesize, and communicate information online (for example, "Use the Internet to locate the record time for the Iditarod dog sled race and who set it. Report your answer, tell where you found it, and explain how you know the information is accurate").
To complete this task, students must (1) locate relevant information using a search engine, (2) verify information with at least one other source, (3) efficiently communicate electronic Web addresses so the receiver can quickly return to the appropriate location, and (4) critically evaluate the information's accuracy.
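A simple point-per-skill rubric makes these four requirements concrete at scoring time. The following sketch, again in Python, assumes a one-point-per-skill scheme of my own devising rather than any published rubric:

```python
# Hypothetical one-point-per-skill rubric for the Iditarod task; the
# criteria and equal weighting are illustrative assumptions.
def score_response(located, verified, urls_reported, accuracy_explained):
    """Each argument is True if the student's work shows that skill."""
    rubric = {
        "locate":      located,             # found the record time with a search engine
        "verify":      verified,            # checked at least one additional source
        "communicate": urls_reported,       # reported working URLs for the sources
        "evaluate":    accuracy_explained,  # explained why the information is accurate
    }
    return sum(rubric.values()), rubric

points, detail = score_response(True, True, False, True)
print(points, detail)  # 3 of 4 points; URL reporting needs support
```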
Another fruitful task is to have students explore a Web site to determine the author's purpose and how that purpose might influence the site's claims. Asking questions like "Does the site provide factual information?" or "Does the site try to sell you something?" helps students gauge author intent. (See www.ascd.org/ASCD/pdf/journals/ed_lead/el200903_coiro_author_intent.pdf for a sample activity investigating a Web site's purpose.) You might also have students read and respond to posts representing multiple viewpoints on a simulated online discussion board.

Difference 2. Dispositions toward the Internet affect online reading abilities.
Positive attitudes about reading on the Internet are key to learning in a digital age. Certain attitudes, self-judgments, and beliefs about the Internet are positively related to effective strategy use when reading challenging online texts. For instance, higher-performing online readers display persistence, flexibility, a healthy sense of skepticism, and confidence as they navigate rapidly changing Internet texts. Lower-performing online readers give up easily and are less open to alternative strategies, less apt to question information they encounter, and less confident in their ability to use the Internet without help (Coiro, 2008; Tsai & Tsai, 2003).
Web 2.0 technologies (such as open-source and social networking sites) and emerging learning standards demand that online readers be personally productive, socially responsible, and able to collaborate with diverse team members both face-to-face and online (American Association of School Librarians, 2007; Partnership for 21st Century Skills, 2007). Accomplished Internet readers are expected to not only gain new knowledge from their reading, but also confidently generate and share knowledge with other members of a globally networked community.
To better understand students' instructional needs in this area, consider having students complete a short survey of their online reading dispositions at various points in the year. Survey items might ask students to rate the value of the Internet for research or its potential—relative to printed information sources—to pique their interest in reading tasks. (See www.ascd.org/ASCD/pdf/journals/ed_lead/el200903_coiro_survey.pdf for a sample survey of online reading dispositions.) Ask students to elaborate on circumstances under which they view themselves as capable Internet readers, as opposed to circumstances that cause anxiety or frustration.

By analyzing responses to these surveys, teachers can identify students who might benefit from guided online reading experiences that will build their confidence or increase their capacity to work collaboratively within electronic communities. For example, a teacher might explicitly show a student who lacks confidence in judging Web site authors' expertise how to locate the "About Us" button on Web sites and scan the information provided for relevant details about each author's past work experiences. Later, the teacher could designate this student as one of the class experts in online critical evaluation skills and encourage classmates having similar difficulties to seek help from the student. Over time, taking on an "expert" role will foster the student's self-efficacy as a competent online reader.
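As for the survey data themselves, a teacher who exports responses to a spreadsheet can summarize where a class stands in a few lines of code. Here is a minimal sketch in Python, assuming a 1-to-5 rating scale; the disposition labels, ratings, and cutoff are invented for illustration:

```python
# Invented Likert-style ratings (1 = strongly disagree, 5 = strongly agree)
# grouped by disposition; labels, scale, and the cutoff are illustrative only.
responses = {
    "persistence": [4, 2, 5, 3],
    "skepticism":  [2, 1, 3, 2],
    "confidence":  [5, 2, 4, 1],
}

for disposition, ratings in responses.items():
    mean = sum(ratings) / len(ratings)
    note = "  <- candidates for guided online reading" if mean < 3 else ""
    print(f"{disposition:12s} mean = {mean:.1f}{note}")
```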
Difference 3. Students often seek answers on the Internet collaboratively.
Anyone who watches a group of students engaged in online research will notice that they often work collaboratively or seek help from others online. Adolescents, in particular, might use instant messaging to quickly share or solicit a Web site address or post their question on a blog to learn what others think about the issue before composing a response.
Unfortunately, students' skills at collaborative online inquiry are rarely captured by traditional assessments, which evaluate reading performance individually and without online assistance. Teachers need new assessments that capture such 21st-century abilities as strong interpersonal communication skills, an understanding of the kinds of team dynamics that foster high-quality outcomes, an appreciation of differences in cultural practices and work patterns, and the ability to respond appropriately to peer feedback (Afflerbach, 2007; Partnership for 21st Century Skills, 2007).
Although there are few existing models to guide future efforts in this area, schools should at least begin to consider alternative measures that evaluate group collaboration and productivity and readers' ability to seek help from a globally networked community. School leaders might benefit, for example, from discussing the theoretical and practical issues involved in designing, using, and interpreting scores on assessments of group collaboration (see Webb, 1995). In addition, New Zealand's University Teaching Development Centre (2004) provides a useful list of critical questions and guidelines to consider before finalizing a program for assessing student group work.
Difference 4. Online reading leaves a trail that reveals students' processes.

Among skilled online readers, a typical product of a reading session includes a synthesis of relevant and reliable information gleaned from two or three Web sites, along with a set of Web site addresses (URLs) that accurately refer to the information's sources; in other words, a trail of effective processes. The process trails of students who have difficulty reading online, however, often amount to little more than an uninformative sentence like "I couldn't find anything about that."
A quick review of recordings of students' online reading sessions, for instance, highlights the fact that many adolescents do not actually use a search engine or type in keywords to launch an online query. Instead, they use a ".com strategy": they type a whole question or phrase into the address bar at the top of an Internet browser, add ".com" to the end, and hope for the best. Similarly, process data reveal three disturbing trends: (1) many students don't look down the page of search engine results; they just click on the first link; (2) although students sometimes attempt to locate information about a Web site's authors to evaluate their level of authority, they often give up when they can't find such information easily; and (3) some students, apparently unaware of simple copy-and-paste strategies for transferring Web site addresses from one location to another, retype lengthy URLs letter by letter, which often leads to mistakes.
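Teachers comfortable with a little scripting can even flag the ".com strategy" automatically in students' typed addresses. Here is a minimal sketch, assuming the addresses have been exported from the recordings into a plain list; both the export step and the detection heuristic are my own assumptions:

```python
# Flag probable ".com strategy" attempts: a typed address that still contains
# spaces but ends in ".com" was likely a whole question typed into the
# address bar. Both the input format and the heuristic are illustrative.
def looks_like_dot_com_strategy(typed_address):
    addr = typed_address.strip().lower()
    return addr.endswith(".com") and " " in addr

history = [
    "what is the fastest iditarod time.com",  # whole question plus ".com"
    "www.iditarod.com",                        # an ordinary site address
]
for entry in history:
    if looks_like_dot_com_strategy(entry):
        print("Possible .com strategy:", entry)
```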
When teachers spot such processing errors, they get key information that helps them better understand which online reading skills and strategies their students struggle with. This information provides a specific reference point for where in the online reading process a group of students, or one reader in particular, needs the most support. In an age of data-informed instruction, we do a disservice to our students by not using readily available technologies to help determine how we can best prepare them for the challenges of Internet reading.
Difference 5. The nature of reading comprehension is changing because of digital technology.
The ultimate challenge in assessing online reading comprehension is that online texts, tools, and reading contexts will continue to change rapidly as new technologies emerge. Until recently, definitions of reading comprehension were grounded in at least 20 years of theory and research that informed educators' thinking about how to measure reading comprehension. Although new comprehension theories and practices have certainly emerged over the years, few have altered the nature of literacy as quickly as the Internet and other digital communication technologies are doing. To help students realize their potential as citizens in a digital age, we need to continually reconsider and expand what it means to be a skilled online reader, and measures of online reading comprehension will need to be revised and reconfigured quickly enough to keep pace.
Obviously, changes in online texts and their associated reading comprehension practices will make it extremely difficult to establish the reliability of scores over time or the validity of scores from one online reading context to the next. But rapidly changing technological innovations will make it easier for teachers to collect, score, and interpret data in practical ways that inform classroom instruction. For example, a computer-based assessment program might soon be able to process electronic scores from an ORCA to generate graphical maps showing how a student's performance in each dimension of online reading comprehension evolves over the year.
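No such program sits on most classroom computers yet, but a rough version is already within reach of anyone willing to enter rubric scores by hand. A minimal sketch using Python's matplotlib library, with invented scores and the four dimensions used throughout this article:

```python
import matplotlib.pyplot as plt

# Invented rubric scores for one student across three ORCAs given during
# the year; the four dimensions mirror those discussed in this article.
dimensions = {
    "locate":      [2, 3, 4],
    "evaluate":    [1, 2, 2],
    "synthesize":  [2, 2, 3],
    "communicate": [1, 3, 4],
}
sessions = ["Fall", "Winter", "Spring"]

for name, scores in dimensions.items():
    plt.plot(sessions, scores, marker="o", label=name)
plt.ylabel("Rubric score (0-4)")
plt.title("One student's online reading comprehension over the year")
plt.legend()
plt.show()
```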
So, how should educators move forward with attempts to measure online reading comprehension in a climate of constant change? One idea is to explore adaptive assessment designs that let teachers easily revise portions of an online reading assessment, such as the ORCA described here, rather than build an entirely new measure each time. A second is to encourage policymakers and measurement specialists to grapple more deeply with what reliable and valid assessment of reading means in a digital age.
But, while we're waiting for test designers and policymakers to pay attention, I recommend that we accept the inevitability of change and think more creatively about how to measure literacy and learning with online reading as part of the picture. Yes, this type of thinking is difficult. But as teachers tackle these new challenges, we should model the kind of flexible, collaborative problem solving we hope students will adopt to help them tackle a rapidly changing digital world.