As a longtime expert at the intersection of education, technology, and equity, Ken Shelton has spent years helping educators navigate a shifting landscape of new tools. In his latest book The Promises and Perils of AI in Education: Ethics and Equity Have Entered The Chat (Lanier Learning, 2024), Shelton and coauthor Dee Lanier acknowledge AI’s dual capacity to improve or harm learning. While AI can democratize knowledge, personalize learning, and bridge socioeconomic divides, they write, it can also exacerbate equity issues. The difference lies in whether schools take a responsible yet open-minded approach to integrating AI that is tailored to learners’ needs. In this interview, Shelton discusses how to recognize bias in AI tools and develop ethical frameworks and skills for integration into the classroom.
As AI becomes more embedded in education, you warn against falling into the “hype trap” of shiny new tools. How can educators approach integration thoughtfully?
With every emerging technology, there is a good side and a side where we need to be intentionally mindful. We have to approach our understanding and use from both a pragmatic and a comprehensive perspective by asking the right questions and by demanding degrees of transparency and accountability, whether from the AI developers or from those who make the decisions to implement AI systems.
As we embrace AI’s potential to revolutionize education, we must equally commit to addressing its ethical challenges, including issues of equity, bias, and student privacy. Our mission as educators is to harness AI, not as a replacement for human judgment, but as a tool to enhance our teaching, empower diverse learners, and actively dismantle systemic inequities in education.
One of AI’s pitfalls is its subjective biases, which can potentially amplify racial, gender, language, and class inequities through its algorithms. How can educators (and students) recognize and combat this bias?
Let’s say we’re going to examine different identity factors—race, gender, ethnicity, sexual orientation, and language. I ask an AI image generator to generate an image of a doctor. Prior to showing the audience the results, I ask, What is your immediate thought as to what the doctor’s going to look like? People often say, Oh, the doctor is going to be a white male with a button-down, a lab coat, and tie. The results from seven different image generators are exactly what they were thinking. AI automates a lot of what we are accustomed to seeing and thinking—it automates the status quo.
In the context of AI literacy, if an educator or student wants to use AI for any purpose, learning how to write effective prompts is invaluable. The general rule is to make a prompt clear, concise, and unambiguous—what do you specifically want the image generator to do? If you keep a prompt too generalized, bias is likely to permeate the results.
How do you personally use AI?
It depends on the purpose, need, and problem I am looking to solve. I like using AI to synthesize large volumes of data and process complex problems to identify first steps or comprehensive plans. For example, with one school district I’m working with, I used two AI platforms to identify the cause of measurable achievement gaps in the performance of emerging multilingual learners and how resources might be better utilized.
Before engaging with AI, I needed to prepare the groundwork. I gathered and disaggregated data over a multiyear period for students scoring one or two reading levels below grade level. I also collected contextual information about their language arts instruction, including pedagogical approaches, content personalization and cultural responsiveness, support mechanisms, and assessment models. Understanding this context was essential for crafting an effective AI prompt.
With this foundation, I then uploaded data into the AI system, including numerical trends and information about existing district resources. My prompt was specific:
Take on the persona of a leader of a medium-sized school district. Use the attached files to identify potential gaps in strategies for intervention and provide recommendations for resources to connect the classroom to the home in support of addressing this gap.
Then the district team and I evaluated the AI’s suggestions, asking ourselves, What are the things we can start to do now? What doesn’t apply to our context? AI served as a thought partner and idea generator, providing critical feedback and support for our planning process.
Educators widely report using AI detection tools to evaluate student work, but studies have shown these tools are more likely to incorrectly flag work by multilingual learners or Black students as “AI generated” compared to their white, monolingual peers. Are these tools more harmful than helpful?
Why does a school system implement these detectors in the first place? If it’s under the construct of academic integrity, oftentimes, that’s just a ruse for compliance, conformity, and control. You turn in your paper, and then unbeknownst to you, your teacher has run it through an AI detector, and the next thing you hear are accusations of plagiarism.
The average efficacy rate of AI detectors is about 20 percent. It’s like a hammer looking for a nail—they have to find something to at least give the appearance that they work. Far too often these platforms are not transparent about how a learner’s intellectual property is used, which may include training the model itself. When a student’s work is uploaded, it is added to the platform’s data sets without that person’s knowledge or consent. Too often, detectors are used reactively instead of schools proactively addressing why a student would need or want an AI platform to do their work for them. A proactive approach is to have conversations with students: What is the ethical use of AI when we’re doing assignments?
As students face uneven access to digital tools, how is this “digital divide” affecting their preparation for an AI-driven future?
I’ve seen too many situations where access to an AI resource is denied to students under the expectation that they’re going to wait until teachers are comfortable first. Students will eventually learn how to leverage it, but it won’t be with the guidance of the educators who are responsible for their learning. Think about how many of the platforms, applications, and services we use have an AI component. What we’re basically saying to students is, I know that it’s the world we live in and AI use is going to continue to increase, but I’m okay with you not knowing how to use it ethically and responsibly.
This is particularly concerning because AI resources could be doing the opposite—allowing for personalization and differentiation as a way of digitally leveling elements of the playing field. One student’s family is financially resourced, so they can hire that student a tutor. Another student whose family cannot afford a tutor can have access to an education-appropriate AI system that gives them feedback on their writing. So the access divide can narrow, especially if the student who’s using the AI resource is guided in leveraging it in ways that best serve their needs.
How do educators navigate ethical considerations for AI use?
I’m watching with a degree of nausea whose voices are being amplified in the AI conversation. To avoid missteps and biases in understanding the role of AI, educators need to look closely at the diversity of perspectives and the representation of the people they’re learning from. Also, start playing with these platforms and go down as many rabbit holes as possible. In a school system, if you’re a leader, create a focus group to provide feedback and assess how well a system works, what it’s doing, the impact it’s having, and how that aligns with your strategic plan or your educational goals. And start to consider, What are the critical questions we need to be asking to apply AI for good?
Editor’s note: This interview has been edited for length.