By Aisha Naz Ansari, PhD student, School of Education, Durham University, United Kingdom
Research on education has long faced criticism because its knowledge production often remains fragmented, slow, and inaccessible to those who could use it most, including teachers, school leaders and policymakers. As Generative AI (GenAI) tools such as ChatGPT, Gemini, and Copilot reshape how knowledge is created and shared, it is timely to ask: Can GenAI help educational research overcome these challenges, or will it reproduce and intensify them?
The enduring limitations of educational research
Research on education has been repeatedly critiqued for producing siloed knowledge. Critics argue that studies often remain confined within disciplinary or contextual boundaries, offering limited cross-sectoral insight and generalisability. While small-scale, context-rich studies have their value, they rarely translate into evidence that shapes policy or informs teachers’ daily practices.
Additionally, the long-standing quantitative-qualitative divide constrains innovation and limits methodological pluralism. Research also suffers from cognitive and linguistic biases such as the dominance of English-language scholarship, which marginalises perspectives from the Global South.
Slow knowledge cycles hinder impact. Lengthy production timelines and manual data analysis delay responses to urgent educational challenges. Meanwhile, weak data-sharing cultures and subjective interpretation raise concerns about reproducibility and transparency.
The rapid rise of GenAI in research practice
This blog argues that education research could be transformed by GenAI, whose uptake has been swift and widespread. Recent estimates suggest that between 115 and 180 million people now use GenAI tools daily. Within higher education, 88% of students are reported to use GenAI, especially ChatGPT, for academic tasks, up from 53% the previous year. Several factors are driving researchers’ growing use of GenAI.
First, the rapid diversification of GenAI tools, from ChatGPT and Gemini to Copilot and domain-specific platforms, has significantly expanded the ways in which researchers engage with knowledge. These tools now support a wide range of research activities, including literature mapping, data analysis, writing and peer-review preparation. Evidence suggests that GenAI applications are increasingly used to assist with evidence synthesis and academic writing, primarily to enhance efficiency and save time.
Second, GenAI is becoming embedded across the entire research cycle. It supports idea generation, scoping reviews, methodological design, coding and analysis, and dissemination through automated summaries and multilingual translations. Early-career researchers increasingly use GenAI as a research accelerator for diverse tasks, from hypothesis generation to drafting full manuscripts. In response, higher education institutions have been compelled to develop and enforce GenAI guidelines to ensure its ethical and transparent use in scholarly work.
Third, GenAI promises efficiency, inclusivity and creativity. It accelerates writing and analysis, enhances engagement with non-English sources, and enables novel forms of inquiry such as simulated interviews and synthetic datasets. However, these shortcuts are accompanied by significant ethical and epistemic tensions, including concerns related to authorship, bias, and the integrity of machine-generated knowledge. Evidence indicates that GenAI tools can support multilingual and interdisciplinary research and can substantially enhance creative performance when used collaboratively with humans, although they may also reduce the diversity of ideas.
Finally, GenAI signals the emergence of algorithmic scholarship, a hybrid form of knowledge production in which human interpretation and machine generation co-produce meaning. This mode of scholarship challenges traditional notions of expertise and originality, prompting researchers to reconsider their roles not merely as authors, but as curators, collaborators and critical evaluators of AI-generated outputs. Arguably, GenAI disrupts conventional authorship models and necessitates new frameworks for understanding scholarly integrity, authorship and meaning-making in the digital age.
The double-edged nature of GenAI
Despite its promise, GenAI presents profound risks. It can dehumanise research, replacing deep contextual interpretation with automated pattern recognition. It can amplify systemic bias, as its training data reflect dominant, often Western and corporate, epistemologies. It may erode human connection, prioritising prediction and text generation over interpretation and dialogue. Ultimately, it risks lowering academic standards, substituting critical thinking with prompt engineering and genuine discovery with algorithmic recombination. Hence, the question is not whether GenAI should be used in educational research, but how it can be used responsibly, critically and reflexively.
A framework for critical use of GenAI
To guide such responsible integration, I propose assessing GenAI-generated outputs through four criteria:
- Epistemic: Evaluate not just the fluency of AI outputs but their depth and coherence. Claims should be judged in relation to evidence, not probabilistic likelihood.
- Methodological: Document prompts, iterations and researcher oversight. Triangulate AI-generated insights with human-coded or empirical data.
- Ethical: Maintain transparency about AI’s role in authorship. Guard against hallucinated data, bias and intellectual misappropriation.
- Reflexive: Treat AI as a critical friend, a catalyst for reflection, not a replacement for human interpretation.
This framework moves evaluation beyond asking whether an AI-generated output is correct, toward asking whether it is intellectually and ethically defensible.
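In practice, the methodological criterion above — documenting prompts, iterations and researcher oversight — can be as simple as keeping a structured audit trail alongside the study. The sketch below is one minimal, illustrative way to do this in Python; the record fields, tool names and file name are assumptions for illustration, not a standard or a prescribed format.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class GenAIRecord:
    """One documented interaction with a GenAI tool."""
    tool: str               # e.g. "ChatGPT" (illustrative name)
    prompt: str             # the exact prompt submitted
    output_summary: str     # researcher's summary of what the tool returned
    researcher_note: str    # oversight: what was accepted, revised or rejected, and why
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def export_audit_trail(records, path="genai_audit_trail.json"):
    """Write the documented interactions to a JSON file,
    suitable for sharing as supplementary material."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump([asdict(r) for r in records], f, indent=2)

# Example: one logged interaction from a hypothetical coding task.
records = [
    GenAIRecord(
        tool="ChatGPT",
        prompt="Suggest inductive codes for these five interview excerpts ...",
        output_summary="Proposed six candidate codes; two overlapped.",
        researcher_note="Kept four codes after triangulating with human-coded data.",
    )
]
export_audit_trail(records)
```

Such a log makes the human-in-the-loop process inspectable: a reader can see not only what the tool produced, but how the researcher exercised judgment over it.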
The path forward: human-AI collaboration
GenAI undoubtedly has the potential to transform educational research, but only if deployed critically, equitably and reflexively. Without these safeguards, it risks accelerating the very inequities, biases and methodological weaknesses that already constrain the field.
A human-in-the-loop approach offers a constructive way forward. Researchers should remain central to interpretation, ethical oversight and contextual judgment. AI can assist, not replace, human reasoning by accelerating tasks such as literature mapping, data coding or hypothesis generation. Transparent documentation of AI contributions can further ensure accountability and scholarly integrity.
Ultimately, human-AI collaboration should position AI as a reflective partner that enhances, rather than diminishes, the intellectual and ethical fabric of educational research. When guided by human judgment and critical thinking, GenAI can support faster, richer and more inclusive forms of inquiry while preserving what truly defines our discipline, its human purpose and moral responsibility.
The post Will Generative AI transform research on education? appeared first on World Education Blog.













