AI in Education 2025: Brookings Framework for Students

📌 Key Takeaways

  • Risks Currently Outweigh Benefits: Brookings research across 50 countries finds that AI risks in education currently overshadow benefits because they undermine children’s foundational development.
  • Three Pillars for Action: The Prosper, Prepare, Protect framework provides 12 actionable recommendations for governments, educators, families, and technology companies.
  • AI Literacy Is Essential: Students, teachers, parents, and education leaders all need holistic AI literacy to navigate the rapidly evolving technology landscape responsibly.
  • Human Agency Must Come First: Effective AI in education requires tools that teach rather than tell, preserving student autonomy and critical thinking skills.
  • Collaborative Action Required: Drawing on consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries, Brookings concludes that preventing AI-related harm to students requires coordinated effort from all stakeholders within the next three years.

AI in Education 2025: Why a New Direction Is Urgently Needed

Since the debut of ChatGPT in late 2022, the education community has been engaged in an intense debate about the promises and perils of generative artificial intelligence. Schools, universities, and education ministries worldwide have scrambled to develop policies for a technology that is evolving faster than any regulatory framework can keep pace with. The Brookings Institution’s Center for Universal Education recognized that waiting a decade to conduct a postmortem on AI’s impact on education would be too late — and instead embarked on a yearlong global study to understand the risks and opportunities in real time.

The urgency of this work cannot be overstated. AI in education 2025 is not a theoretical discussion — it is a lived reality for hundreds of millions of students worldwide who interact with AI tools daily, often without guidance or guardrails. From AI-powered tutoring apps to essay-generating chatbots, these technologies are reshaping how students learn, think, and develop. The question is no longer whether AI will transform education, but whether that transformation will enrich or diminish the learning experience. As organizations explore how AI transforms workforce development, the foundation starts in the classroom.

The Brookings Global Study: Methodology and Key Findings

The Brookings Institution’s research represents one of the most comprehensive global studies on AI in education to date. The methodology combined interviews, focus groups, and consultations with over 500 students, teachers, parents, education leaders, and technologists across 50 countries. Researchers conducted a close review of over 400 academic studies and convened a Delphi panel — a structured forecasting method that draws on the collective expertise of diverse specialists to identify emerging trends and risks.

The central finding is sobering: at this point in its trajectory, the risks of utilizing generative AI in children’s education overshadow its benefits. This conclusion is not a blanket rejection of AI in education — rather, it reflects the current state of implementation, where risks and benefits differ fundamentally in nature. The risks of AI undermine children’s foundational development — their capacity to learn independently, their social and emotional well-being, their relationships with teachers and peers, and their safety and privacy. These are not marginal concerns; they strike at the core of what education is meant to achieve.

Crucially, Brookings researchers found that these foundational risks may actually prevent the benefits of AI from being realized. If AI tools erode the cognitive and social foundations that make learning possible, then even the most sophisticated educational AI applications will fail to deliver on their promise. This insight reframes the entire debate: the goal is not simply to maximize AI’s benefits while minimizing risks, but to ensure that the pursuit of benefits does not undermine the prerequisites for learning itself.

How AI Enriches Student Learning When Done Right

Despite the cautionary findings, the Brookings study is clear that AI has genuine potential to enrich student learning — when deployed thoughtfully within a pedagogically sound framework. Well-designed AI tools and platforms can offer students personalized learning experiences that adapt to their individual pace, level, and learning style. AI-powered adaptive tutoring systems can identify knowledge gaps in real time and provide targeted practice, something that would be impossible for a single teacher managing a classroom of thirty students.

AI can also enhance educational accessibility by providing real-time translation, text-to-speech capabilities, and adaptive interfaces for students with disabilities. For students in underserved communities, AI-powered educational platforms can provide access to high-quality instructional content that might otherwise be unavailable. Language learning applications powered by AI can offer immersive practice opportunities with natural conversation simulation, helping students develop fluency in ways that traditional classroom instruction alone cannot match.

The key distinction the Brookings researchers emphasize is that AI tools should teach, not tell. This means AI applications should guide students through learning processes — asking probing questions, providing scaffolded hints, and encouraging reflection — rather than simply providing answers. When AI serves as a thinking partner rather than an answer machine, it can genuinely enhance the learning experience while preserving student autonomy and critical thinking skills. The difference between AI that enriches and AI that diminishes learning lies not in the technology itself but in how it is designed, deployed, and governed.

AI-Diminished Learning: Risks to Student Development

The Brookings study identifies four critical areas where overreliance on AI tools can harm students. First, AI can diminish students’ capacity to learn independently. When students routinely use AI to generate essays, solve problems, or summarize texts, they bypass the cognitive processes — struggling with ideas, making connections, correcting errors — that are essential for deep learning. Over time, this creates a dependency that weakens the very skills education is meant to develop.

Second, excessive AI use threatens students’ social and emotional well-being. The engagement patterns designed into many AI platforms — constant feedback loops, gamification, and personalized content streams — can foster addictive usage patterns that displace physical activity, face-to-face social interaction, and the unstructured play that is crucial for child development. The UNICEF Policy Guidance on AI for Children has highlighted similar concerns about the intersection of AI and children’s well-being.

Third, AI can erode the trusting relationships between students and their teachers and peers — relationships that research consistently identifies as among the most important factors in educational success. When AI mediates too many learning interactions, the human connection that motivates students, provides emotional support, and models critical thinking can be diminished. Fourth, student safety and privacy are at risk when educational AI platforms collect vast amounts of behavioral data, learning patterns, and personal information, often without adequate safeguards or transparent data governance practices.

Prosper: Shifting Educational Experiences for AI in Education

The first pillar of the Brookings framework — Prosper — focuses on ensuring that AI enhances rather than replaces the educational experiences that matter most for student development. This requires a fundamental shift in how schools and education systems approach AI integration, moving away from technology-driven adoption toward pedagogy-driven design.

Under the Prosper pillar, Brookings recommends that educational institutions shift classroom experiences to emphasize the uniquely human skills that AI cannot replicate: creativity, critical thinking, collaboration, empathy, and ethical reasoning. Rather than competing with AI on tasks like information retrieval and text generation — a race that students will inevitably lose — schools should focus on developing the capacities that make humans uniquely valuable in an AI-augmented world.

A critical recommendation under this pillar is to co-create educational AI tools with educators, students, parents, and communities. The most effective educational technology is developed not in isolation by tech companies but through genuine collaboration with the people who will use it. This co-creation process ensures that AI tools are designed around learning objectives rather than engagement metrics, and that they reflect the diverse cultural contexts and educational needs of different communities around the world. For institutions exploring how to leverage education technology effectively, the Prosper framework provides essential guidance.

Prepare: Building AI Literacy Across the Education Ecosystem

The second pillar — Prepare — addresses what may be the most significant gap in the current AI-in-education landscape: the widespread lack of AI literacy among students, teachers, parents, and education leaders. Without a shared understanding of how AI works, what it can and cannot do, and how to use it responsibly, all stakeholders are making decisions in the dark.

For students, AI literacy means more than learning to code or understanding machine learning algorithms. It means developing the critical thinking skills to evaluate AI-generated content, recognize bias in AI systems, understand the data practices of AI platforms, and make informed decisions about when to use AI tools and when to rely on their own capabilities. The UNESCO AI and Education guidance similarly emphasizes the importance of comprehensive AI literacy that goes beyond technical skills.

For teachers, preparation is equally critical. Brookings recommends preparing teachers to teach both with and through AI — using AI as a tool to enhance their instruction while also teaching students how to navigate an AI-rich world. This requires significant investment in professional development programs that help teachers understand AI capabilities and limitations, integrate AI tools effectively into their pedagogical practice, and model responsible AI use for their students. Currently, most teacher training programs include little to no AI-specific content, leaving educators to figure out AI integration through trial and error while managing classrooms full of students who are already using these tools daily.

Protect: Regulatory Frameworks and Student Safety

The third pillar — Protect — addresses the urgent need for comprehensive regulatory frameworks that safeguard students while allowing responsible AI innovation in education. Currently, the regulatory landscape for educational AI is fragmented, with most countries lacking specific legislation governing how AI can be used in educational settings. This regulatory vacuum means that students’ data, privacy, and developmental well-being are largely dependent on the self-regulation of technology companies — an approach that has proven insufficient in other domains.

Brookings calls for governments to establish comprehensive regulatory frameworks for educational AI that address data privacy, algorithmic transparency, content safety, and age-appropriate design. These frameworks should require educational AI platforms to undergo rigorous testing and certification before deployment in schools, similar to the safety standards required for pharmaceuticals or children’s toys. Procurement processes should prioritize technology that protects students’ privacy, safety, and security, creating market incentives for companies to build these protections into their products from the ground up.

A particularly important recommendation under the Protect pillar is to break the engagement addiction that characterizes many AI-powered platforms. Technology companies should design platforms centered around positive mental health for children and youth, moving away from the attention-maximizing design patterns that have caused documented harm on social media platforms. This requires both regulatory pressure and a fundamental shift in how educational technology companies define success — measuring learning outcomes rather than time-on-platform or engagement metrics.

Closing the AI Divide in Education Systems Worldwide

One of the most concerning dimensions of AI in education 2025 is the growing divide between well-resourced educational systems that can invest in high-quality AI integration and underserved communities that risk being left further behind. This AI divide threatens to exacerbate existing educational inequalities on a global scale, creating a two-tier system where affluent students benefit from AI-enhanced learning while disadvantaged students either lack access entirely or are exposed to lower-quality, less carefully designed AI tools.

Brookings recommends employing innovative financing strategies to close this AI divide. This includes public-private partnerships that fund AI infrastructure in underserved schools, development aid targeted at educational technology capacity building, and tax incentives for companies that provide equitable access to educational AI platforms. International organizations including the World Bank and regional development banks should prioritize educational AI equity in their lending and grant programs.

Closing the AI divide also requires attention to the cultural and linguistic dimensions of educational AI. Most AI tools are developed primarily for English-speaking markets, leaving students who speak other languages with inferior experiences. Ensuring that AI-powered educational tools are available in diverse languages, reflect diverse cultural contexts, and are designed with input from diverse communities is essential for preventing AI from becoming another vector of educational inequality.

What Parents and Families Need to Know About AI in Education

The Brookings study recognizes that parents and families play a crucial role in shaping how children interact with AI, particularly outside the school environment. Many parents feel overwhelmed by the pace of AI development and uncertain about how to guide their children’s use of these technologies. The research recommends supporting families to manage children’s AI use at home through accessible educational resources, community workshops, and school-family partnerships that provide practical guidance.

Parents should understand that AI tools vary enormously in quality and safety. Some educational AI applications are rigorously designed with child development principles in mind, while others prioritize engagement over learning and may collect excessive personal data. Families should look for AI tools that are transparent about their data practices, designed with age-appropriate safeguards, and focused on supporting rather than replacing the learning process. Setting boundaries around AI use — including screen-free times, AI-free homework assignments, and regular conversations about what children are learning from and about AI — helps maintain the balance between leveraging AI’s benefits and protecting against its risks.

Importantly, parents themselves need to develop basic AI literacy. Understanding how generative AI works, recognizing the limitations of AI-generated content, and being aware of the privacy implications of educational technology platforms enables parents to make informed decisions and have meaningful conversations with their children about responsible AI use. Schools can support this by including parents in AI literacy initiatives rather than treating them as passive recipients of technology policies.

Actionable Steps for Educators, Policymakers, and Tech Companies

The Brookings framework concludes with a call to action that is both specific and urgent. The researchers urge all relevant actors — governments, technology companies, education system leaders, families, and civil society organizations — to identify at least one recommendation to advance over the next three years. This is not a call for comprehensive reform that takes decades to implement; it is a recognition that AI in education 2025 is evolving too rapidly for inaction.

For educators, the most impactful immediate steps include: conducting research on how AI affects children’s learning and development in specific educational contexts; co-creating AI tools with diverse stakeholder communities; and shifting classroom practices to emphasize uniquely human skills. For policymakers, priorities should include establishing regulatory frameworks for educational AI, implementing procurement standards that prioritize student safety, and investing in teacher preparation programs that include comprehensive AI training.

For technology companies, the Brookings research sends a clear message: the current trajectory of educational AI development is not sustainable. Companies must move beyond engagement-driven design toward models that genuinely support student learning and development. This means investing in evidence-based design, conducting independent impact assessments, being transparent about data practices, and collaborating with educators and researchers rather than treating schools as markets to be captured. The financial incentives of the current model may favor engagement over learning, but the social costs of AI-diminished education — measured in lost potential, widened inequalities, and eroded trust — will ultimately prove far more expensive.

As we explore how organizations can better communicate global education policy insights, the Brookings framework offers a model for translating research into action. The choice between AI-enriched and AI-diminished education is ours to make — and the window for making it wisely is narrowing.

Frequently Asked Questions

What are the main risks of AI in education for students?

According to Brookings research involving over 500 students, teachers, parents, education leaders, and technologists across 50 countries, the main risks include undermining children’s foundational learning capacity, harming social and emotional well-being, eroding trusting relationships with teachers and peers, and compromising student safety and privacy. Overreliance on AI tools can diminish rather than enrich student learning.

What is the Brookings Prosper, Prepare, Protect framework for AI in education?

The Prosper, Prepare, Protect framework offers three pillars of action: Prosper focuses on shifting educational experiences to maximize AI benefits while centering human development. Prepare emphasizes AI literacy for students, teachers, parents, and education leaders. Protect calls for comprehensive regulatory frameworks, privacy safeguards, and ethical AI use guidelines.

How can teachers effectively integrate AI into their classrooms?

Teachers should use AI tools that teach rather than tell, meaning tools that guide students through learning processes rather than providing direct answers. Effective integration requires professional development in AI literacy, co-creation of educational AI tools with educators and communities, and embedding AI within pedagogically sound approaches rather than using it as a standalone replacement for instruction.

What role should governments play in regulating AI in education?

Governments should establish comprehensive regulatory frameworks for educational AI, procure technology that protects student privacy and safety, provide clear vision for ethical AI use centered on human agency, and employ innovative financing strategies to close the AI divide between well-resourced and underserved communities.

Can AI improve student learning outcomes in 2025?

AI has the potential to benefit students through personalized learning, adaptive tutoring, and enhanced educational resources when deployed as part of an overall pedagogically sound approach. However, Brookings research finds that at this point in its trajectory, the risks of utilizing generative AI in children’s education overshadow its benefits, largely because these risks undermine foundational development.
