AI Education: Closing The Digital Divide And Ensuring Equitable Access

Artificial Intelligence (AI) is not new — various AI tools have been used in classrooms and other learning environments for years, from adaptive learning assessments to sophisticated spellcheck and grammar programs. However, recent advancements in AI technology, particularly the emergence of generative AI programs like OpenAI’s ChatGPT and Google’s Gemini, along with more education-specific AI products, have brought about a significant shift.* These tools now offer students and educators powerful ways to generate novel text, images, and audio, and technology companies are racing to embed generative AI capabilities into new and existing education software, accelerating this shift. Experts predict that these developments will drastically alter the global economy — according to a widely cited report from Forrester Research, generative AI may replace upwards of 2.5 million jobs in the U.S. by 2030, highlighting the need to prepare students, especially historically underserved students, for the future workplace.

The Pros and Cons of Using AI in Education

This new wave of AI is expected to transform teaching and learning by offering innovative approaches to enhance learning outcomes, sparking students’ curiosity in a digital environment, and providing teachers with tools for personalized instruction. AI tools could help to reduce racial disparities in education and promote equitable opportunities. Some new and developing AI tools, like Khan Academy’s online tutoring assistant Khanmigo, aim to tailor learning experiences to an individual student’s needs, preferences, and necessary learning supports.* Organizations are hopeful that AI tools will soon be capable of tailoring assessment questions and homework problems, adjusting their difficulty level, and offering real-time, individualized feedback for each user. They also hope these tools will help educators identify early signs of academic struggles or disengagement among students, enabling timely interventions.

The potential of AI to empower educators to support students more effectively is enticing, but it’s imperative that the inequities and potential risks be understood and mitigated. Widespread adoption of AI, like any transformative technology, could have unintended consequences and pose potential dangers for students — especially students of color and students from low-income backgrounds, who have long been underserved.

For starters, if not properly designed and trained, AI tools could inadvertently reinforce existing biases and stereotypes. AI tools learn from data, and if the data used to train them is biased or lacks diversity, the tools can perpetuate discrimination. There is already evidence that AI tools are prone to amplify bias, both generally and in education-specific contexts. At the administrative level, a reliance on AI in decision-making processes, such as identifying students for advanced coursework or streamlining the provision of services, could lead students of color to be unfairly penalized or shut out of opportunities if the algorithms used aren’t transparent or vetted for bias.

AI could also widen the digital divide. Students of color and students from low-income backgrounds already have inequitable access to devices and high-speed internet. An increasing reliance on AI could exacerbate resource and opportunity inequities, making AI something that only White and wealthy students can access. Students of color, who already contend with extreme racial biases and stereotypes in educational settings, will be further marginalized if these risks aren’t averted.

Mitigating the Risks

As such, it is imperative that AI systems be designed, adopted, and implemented with equity in mind.

EdTrust wants to ensure that:

  • Students of color have access to the same AI technologies as their peers, without exception, alongside universally accessible safeguards. Ensuring fair access to AI tools requires closing the current digital divide by making sure that students from under-resourced communities have access to high-speed internet and devices at home and in school.
  • Federal and state education agencies formulate policies and practices regarding the use of AI in classrooms that prioritize mitigating bias and require vendors to prove that their tools do not exacerbate inequities faced by students of color and other underserved students.
    • Guidance and regulations should be developed by AI experts with diverse backgrounds and experience, alongside students, teachers, parents and caregivers, and education leaders, especially those from underserved and diverse communities.
  • Federal and state education agencies require and fund trainings and other professional development opportunities for educators to learn to use AI technologies in ways that foster inclusive, engaging, and rigorous learning experiences.
  • District and school leaders adopt processes and employ criteria to assess whether AI technologies that would be implemented in classrooms have a clear purpose, are inclusive and proven effective in achieving that purpose, and do not discriminate based on race, ethnicity, income, gender identity, or ability.
  • AI technologies are used in ways that foster, not replace, opportunities for students to build positive relationships with adults and peers, as strong relationships provide a crucial foundation for student engagement, belonging, and learning.

What Happens Next

The widespread use of generative AI in education is all but inevitable, and these tools are still in their infancy — companies will continue to develop more advanced products designed for school use. It is, therefore, crucial that educators, school leaders, and policymakers consciously ensure that the AI tools they adopt are inclusive and that the implementation of these tools is grounded in equity and centers the voices of students, parents, and educators of color. This means actively vetting any AI tools used in educational contexts and addressing potential inequities; ensuring all students, especially students of color, have access to new technologies; and adopting state- and district-level policies that ensure AI tools are implemented purposefully, carefully, and in a way that does not exacerbate inequities faced by students of color and those from low-income communities.

EdTrust sees this as core to our mission of improving equity in education, and we are committed to grounding our developing AI work in the assets and experiences of students, families, and communities. Urgent action is needed to understand the many ways that emerging AI technologies will affect students, especially students of color, and to ensure that the voices of those impacted are included and heard in key discussions and decisions related to AI. In the coming months and years, we plan to conduct a landscape analysis; facilitate listening sessions with superintendents, equity and civil rights organizations, students, and families to capture the emerging needs of diverse communities; and convene industry experts and seasoned education leaders to discuss the challenges and opportunities of next-generation technology in education. Drawing from our listening journey, engagement, and research, we will develop education-specific guidelines and guardrails to ensure new AI tools are designed, implemented, and analyzed through an equity lens.

*EdTrust mentions these AI technologies for illustrative purposes only; this is not an endorsement.