Let's say you are a superintendent confronted with a persistently high rate of chronic absenteeism. As part of the year's strategic planning process, you want to focus on this challenge. Members of your team are making compelling arguments for a range of approaches. There's some talk of interventions that directly target attendance, like sending parents text messages about their child's attendance. Others are pitching ideas associated with a broader range of goals, like anti-bullying initiatives, parent engagement, and school-climate improvements, arguing these measures could help improve attendance as well.
Rather than relying on intuition to choose which interventions sound most promising for your goal, you reflect that these are not new ideas. Other districts have been down these roads before, and knowing how things worked out elsewhere would help guide your decision.
In other words, it seems like research should be able to help. But you don't have sufficient staff resources to slog through dozens of academic articles that may not directly answer your questions. How can you make the process of using research more efficient?
In our work training current and future education leaders, we draw on our own background as researchers to suggest a two-step process to funnel down to the research you need. The first step is turning your problems of practice into specific research questions so you can better search for relevant results. The second step is to look for answers in syntheses of research rather than individual studies. The findings of any one study will not tell you much about the overall state of the literature; it's more efficient to look for pieces that summarize the existing evidence base.
Asking narrower questions and looking for syntheses of findings doesn't require extensive training or tons of time; it just takes a little practice and a bit of familiarity with where to look. Anyone in your district with an interest in using research—or even better, multiple people with different perspectives—can do this work. Let's take a closer look at this process.
To find good research for your problem of practice, you must first translate it into the kinds of questions that researchers ask. This step is necessary because of a key disjuncture between research and practice: Practitioners solve problems, while researchers answer questions. Further, the problems practitioners solve are thorny and complex, often rooted in specific educational, historical, cultural, and social dynamics, and rarely the kind that a single piece of research can show how to solve. Meanwhile, researchers structure their work to answer specific questions—the simpler and narrower, the better.
Asking the Right Questions
Getting used to asking questions in ways that researchers can answer will make the process of using research much easier, because you will more quickly home in on the studies that are most relevant to the specific problem you are trying to solve.
We suggest brainstorming research questions in three main categories: questions about diagnosis, impact, and implementation.
Questions about diagnosis investigate why the problem exists: in the above example, why chronic absenteeism is so high in the first place. They help you ensure that you are solving the right problem by determining its root causes. If answered well, these questions generally point to multiple factors rather than any single silver-bullet solution. This is a critical stage in planning and improvement—and one where both existing research and your own data can help.
Research on trends and patterns in chronic absenteeism, for example, highlights the importance of factors such as safe transportation to school, school climate and culture, and student health in driving absenteeism rates. Even if you didn't have time to look at that prior research, you could still generate a list of possible diagnoses (hypotheses) for why students might be absent frequently. You can then investigate those issues with your own data to determine which are the most relevant and pressing in your context. For example, you could pose questions like, "Are the students with higher absence rates more likely to take the bus or walk? Are they experiencing bullying or other negative interactions at school? Do they visit the school nurse frequently?" We recommend asking questions that focus on differences in access to resources and opportunities—such as safe transportation to school or a positive school climate—rather than fixed characteristics such as students' family incomes or racial and ethnic identities. While inequities related to income and race may be among the root causes of your problem, they aren't issues that your school district can do much about in the short run. Instead, prioritize gathering information on factors under the school system's control, which you could potentially influence or change.
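If someone on your team is comfortable with a little code, checking these hypotheses against district data can be quick. Here is a minimal sketch in Python, assuming a student-level export from your student information system; the file name, column names, and 10 percent threshold are illustrative, not prescriptive.

import pandas as pd

# Hypothetical student-level export: one row per student, with an
# absence rate plus a few candidate diagnostic factors.
df = pd.read_csv("students.csv")

# Flag chronically absent students (commonly defined as missing
# 10 percent or more of enrolled school days).
df["chronically_absent"] = df["absence_rate"] >= 0.10

# Do absence rates differ by how students get to school?
print(df.groupby("travel_mode")["absence_rate"].mean())

# Do chronically absent students report more bullying, or visit
# the school nurse more often, than their peers?
print(df.groupby("chronically_absent")[["bullying_reports", "nurse_visits"]].mean())

Simple group averages like these won't prove causation, but they can tell you which diagnoses deserve a closer look in your context.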
A more common type of question educators ask is about impact: whether a particular strategy "works" or what its effects are. The trick for connecting these questions to the research base is to frame them precisely. You need to ask about the impact of what specific strategy or intervention (What do we mean by parent engagement?) on which outcomes (Attendance rates or chronic absenteeism?), under what conditions and contexts (Kindergarten or high school? In person or virtual?), and relative to what (Business as usual? Access to some related intervention? Something else—ideally your best alternative?). Getting specific with the question before you look for research prepares you to interpret what you find later and to judge what is most relevant for your situation.
Equally important, but often neglected, are questions about the implementation of a strategy. What resources are required: staffing, time, space, training, scheduling, and so on? What else needs to be in place for the strategy to succeed—what are all the assumptions that must hold for this strategy to work as you anticipate? Implementation requirements (think cost, staffing, scheduling) often rule out some strategies as infeasible, so it is useful to gather these details early in your research process.
"How Do We Know?"
The questions you generate about diagnosis, impact, and implementation may still be pretty broad, potentially turning up many off-topic results. To break these big questions into smaller ones, try asking "How do we know?" This helps identify the assertions and assumptions embedded in your reasoning, so that you can be sure your whole line of thinking is based on facts rather than assumptions.
Here are some ways you might ask "How do we know?" about the idea to use a school climate improvement initiative to reduce absenteeism.
How do we know that our school's climate is strong or weak? (What data do we have to support this claim?)
How do we know that weak school climate is the right diagnosis for our problem? (Do we see that schools in our district with weaker climates also have higher absenteeism rates? Do other factors seem to have a stronger correlation to absenteeism?)
How do we know that improving school climate will reduce absenteeism?
How do we know what strategies work best for improving climate in our context? (And how will we know they caused the climate to improve, rather than just being common strategies in schools that would have had positive climates anyway?)
How do we know we would be able to implement those strategies well? (What are all the resources involved? Do we have them?)
Many types of materials, including academic research, marketing, media, or advocacy, claim to be "evidence-based." Asking these types of questions ensures that you won't be swayed by a single study that doesn't represent the typical findings on the topic, or by a summary identifying factors that may be irrelevant to your circumstances.
Find the Answers Already Out There
After switching from a broad topic or problem to a series of specific questions, you'll generally find it more straightforward to match up your needs with the existing research base. But you'll still need to look for research efficiently. In most cases, we don't recommend reading original research articles—or education journalism that is about just one study. Not only is it too time-consuming to fit in with the workflow of school and district leaders, but this approach only steers you toward one or two trees when there's a whole forest out there (Gordon & Conaway, 2020).
So, how can you quickly scan the forest?
The research resources that are the most helpful to school leaders generally fall into a few different categories:
Systematic reviews. The "systematic" here refers to how research is selected for inclusion in the review. These reviews include all studies that meet certain criteria, identified through exhaustive searches, rather than highlighting some studies and excluding others in an ad hoc fashion. The criteria for selection are reported transparently: for example, which publication dates, keywords, sample sizes, or methodologies were included, which databases were searched, and how many studies were identified that met those criteria. The transparent and comprehensive nature of these reviews means they are great—when you can find them. But since they are so time-consuming to produce, you often can't. One excellent source for systematic reviews is the Campbell Collaboration.
Research syntheses. You can find these types of overviews in policy briefs, advocacy reports, and education journalism. Most pieces describe what "research says" and cite multiple studies, though they are often not systematic in the ways described above. They are generally still useful, because they are written by experts who have spent years developing their knowledge and professional judgment on the topic. Nonetheless, you should always keep in mind who is writing and publishing the piece, and whether their interests are independent of the research findings. The What Works Clearinghouse Practice Guides are independent research syntheses, conducted by researchers with no conflicts of interest.
Commentary or "how-to" pieces. You can find these pieces in traditional or online media sources, from professional associations, and in the amorphous world of social media. They are often written in an approachable style and easy to find, which makes them convenient sources to turn to. As the lines between reporting and opinion blur, however, it can be hard to judge when a piece has an agenda. When authors assert how things are, or what you should do in a particular situation, it often sounds like it could be based on research. If research isn't explicitly cited to back up statements, assume the author is sharing a point of view, rather than a research consensus. You can also check the author's bio and credentials to see what expertise they have in the field.
The Value of Practical Significance
The final challenge is uncovering the parts of the research that are most useful for your needs. Unfortunately, those issues aren't often what researchers prioritize in their write-ups. Most research will lead with whether findings are "statistically significant" or what their "effect size" is. Statistical significance is a technical criterion that measures whether the estimated impact is likely to be "real" or just due to chance. Effect size is a standardized measure of the strength of the relationship between two factors. Though these indicators are often used to describe how authoritative research findings are, they tell you nothing about how the research should feed into your own thinking and decisions.
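To make the effect-size idea concrete, here is one widely used measure, Cohen's d (the choice of measure is ours, offered purely as an illustration). It expresses the difference between the treatment and comparison groups' average outcomes in standard-deviation units:

\[
d = \frac{\bar{x}_{\text{treatment}} - \bar{x}_{\text{comparison}}}{s_{\text{pooled}}}
\]

An effect size of 0.10 on attendance, for instance, means the treatment group averaged one-tenth of a standard deviation better than the comparison group. Whether that is a lot or a little is exactly the judgment call the next paragraph takes up.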
To make research useful, instead focus on the practical significance of research findings (or, as we call it, the importance). Practical significance won't be reported in any paper or report. It requires a judgment call, specific to your own situation. Is the impact big enough to matter educationally? Is the strategy feasible to implement, and the best option for you given the resources it would require (Kraft, 2020)?
Figuring out how relevant research is to your context is an important and challenging step. If you are thinking narrowly about what "similar" districts or schools look like, you will rule out irrelevant research, but you might also be overly restrictive. Every district is special—but not as special as you may think. Rather than thinking about what is or is not close enough to "count," focus on which aspects matter for the problem at hand. For example, if you are the principal of an elementary school seeking to reduce chronic absenteeism, a study targeting high school students may have little to offer, but you might be less concerned about the difference in grade span if you are trying to design a teacher-mentoring program.
Putting the Questions Before the Answers
The world of research has much knowledge to benefit practitioners, but it can be hard for practitioners to know where and how to sift through the research base to find the most relevant information for their needs. By translating problems of practice into the narrower types of questions that research can answer, educators can locate relevant research faster—as well as more effectively diagnose the causes of the challenges they face. Research will never answer all the questions education leaders may have, and ultimately it's their job to make a decision even in the absence of complete information. But learning efficiently from those who have studied similar ideas before can help educators more quickly home in on the strategies that will benefit their students the most.
How to Read a Research Paper: The R3I Method
To quickly find the information you want and need in a research paper, we recommend the R3I Method—reading for relevance, inference, impact, and importance.
Relevance. Is the paper relevant to the problem you are trying to solve? Is this intervention aimed at an outcome you want to change? If so, is the intervention something you might actually try? How and why did the intervention work, and are those same conditions in place in your context?
Inference. Does the study's methodology support the inference it is trying to make? Often research aims to infer that an intervention caused an outcome. To measure causation, not just correlation, the study must account for the fact that people who participate in an intervention are likely to be different from those who do not, especially when they choose whether to participate. Randomly assigning some people to participate and others to serve as a comparison group is the best way to handle this problem. If you're not a research expert, look for terms like experiment, quasi-experiment, randomized controlled trial, or random assignment as signals that the researchers addressed this issue.
Impact. Is the impact positive or negative, and how big is it? In particular, is the impact big enough to matter educationally, and is it worth the resources it would require to implement?
Importance. Are the findings statistically and practically important?
The following chart summarizes where you're likely to find R3I in an academic paper.
Where to Look for R3I in a Research Paper
Paper Section              Should you read it?   What will you find?
Abstract                   Yes!                  Most or all of R3I
Introduction               Maybe                 Most or all of R3I
Background                 Maybe                 Relevance
Intervention or context    Yes!                  Relevance
Data or sample             Maybe                 Relevance
Methods                    Probably not          Relevance
Results                    Maybe                 Impact, Importance
Discussion                 Yes!                  Most or all of R3I
Source: Adapted from the blog post "How to Read a Research Paper: The R3I Method" by Carrie Conaway, which appeared on the Rethinking Research for Schools website. Used with permission.
Reflect & Discuss
➛ Where do you most often hit a snag when searching for research to help with problems of practice? Why do you think this is?
➛ In what ways could the questioning strategies outlined in this piece help you in researching issues in your school?
➛ Where do you go to read relevant research to help you with your school's challenges? Do you think your sources are sufficient?
References
• Gordon, N. (2021, January). One study is enough to be dangerous. School Administrator, 12–13.
• Gordon, N., & Conaway, C. (2020, July 13). How districts can learn from their COVID response: Stats 101 not required. Phi Delta Kappan, online.
• Kraft, M. (2020). Interpreting effect sizes of education interventions. Educational Researcher, 49(4), 241–253.
End Notes
1. In our book, Common-Sense Evidence: The Education Leader's Guide to Using Data and Research (Harvard Education Press, 2020), we go into more depth on answering the questions once you've posed them, both with existing research and your own data.