ELT and AI for Good


Cambridge publishes a series of short academic titles called ‘Elements’. There are dozens of them and, for a few weeks only after publication, they can be freely downloaded in pdf format. They vary in quality. Some are very useful. For others, the quality is commensurate with the price you pay.

A recent addition to the series is ‘Generative Artificial Intelligence and Language Teaching’ (Moorhouse & Wong, 2025). In 8 chapters and under 80 pages, the authors rattle through a general introduction to Generative AI (1), the use of Generative AI as a ‘knowledge resource and development tool’, as an assistant for lesson planning and materials production, and for use with assessment and feedback (2-4), how students use AI for language learning (5), ethical and social considerations (6), necessary AI skills (7) and professional development through AI (8). There’s little, if anything, that is new here for people who already know about and use Generative AI in language teaching. For those who don’t, there’s not enough in the way of detailed practical suggestions to make this title useful.

The book claims to promote ‘evidence-informed approaches’. It doesn’t. It is an enthusiastically crude flag-waving exercise for GenAI. Such evidence as is cited is extremely selective, and evidence that might deflate the authors’ hype is ignored. The tone of the book is, perhaps, best summarised by a little poem (in part used to demonstrate how good AI-generated poems are!) that the authors generated with AI to summarise chapter 2:

So teachers rise, embrace the aid,

Of GenAI tools carefully made

Reservations about, say, the problematic reliability of ChatGPT, or the bias and stereotyping that are unavoidable in its output, are always relegated, very briefly, to the end of sections. Some of the suggestions are frankly bizarre. The authors suggest that teachers use ChatGPT to find out what pedagogical approach might be good for them: ‘Language teachers can prompt a conversational AI chatbot to generate a list of methods and approaches, provide key information about a specific approach, or create example activities and lesson plans that illustrate the implementation of the approach in a specific context.’ The example that is provided is of using the Silent Way – not, perhaps, the most evidence-informed approach. Then, there’s the suggestion to use GenAI to produce an exercise which involves transforming passive verbs to active ones (p.31). It’s hard to imagine what kind of evidence might have informed this. Stilted, unnatural ‘dialogues’ are praised for their authenticity and inclusion of conversational features.

The chapter on ethical and social considerations is especially shallow. The problem of standardised English (produced by GenAI) is, apparently, easily solved: ‘Educators can take proactive steps to balance the benefits of many conversational AI chatbots’ standardised outputs with the promotion of linguistic diversity.’ To address issues of copyright and intellectual property, teachers should explicitly teach students ‘about the importance of copyright laws and the limitations of AI-generated content’. Bias is acknowledged as a problem, but no solutions are offered.

When it comes to environmental costs, the suggestion is to prioritise ‘specific AI tools that might require less computational power, optimise classroom workflows to reduce unnecessary AI usage, and advocate for greener infrastructure from AI developers’. Yup, it’s as easy as that!

As for the ways that students use GenAI, AI literacy needs to be fostered because without it ‘students may either over-rely on GenAI or fail to leverage its potential in ways that truly benefit their learning’. The first example that is given of how to do this is to give students the following prompt:

I am a high school student learning English and I struggle with verb tenses and phrasal verbs. Please create a quiz with 10 sentences where I have to choose the correct verb tense (past, present, or future), and explain why the answer is correct.

Again, not a lot of evidence I’m aware of would support this approach.

Underlying the whole book is the unexamined assumption that language teaching can be made more efficient by using GenAI: ‘AI tools can enhance teaching by automating routine tasks such as grading or generating lesson plans, allowing teachers to focus more on individualised instruction and critical thinking activities’. There’s no evidence to support this claim, and no indication that there might be evidence that suggests the opposite (see, for example, Selwyn et al., 2025). The claim is probably taken from an OECD report (2023), full of the OECD’s usual techno-solutionism, and lacking any research.

What kind of person would write such a one-sided book, one that so studiously ignores the evidence? Well, one of the authors, B. L. Moorhouse, gave a talk earlier this year entitled ‘AI for Good: Nurturing critical and positive uses of GenAI in language learning’. What does this choice of title, ‘AI for Good’, tell us?

‘AI for Good’ is a United Nations initiative, established in 2017, with a brief to identify innovative applications of AI to solve global challenges. It organises annual conferences which bring together people who are interested in its mission, and celebrates worthy projects in its ‘AI for Good Impact Awards’. These projects are often very inspiring. There’s a chatbot co-created with refugee communities to strengthen healthcare access and autonomy for refugee women. Another uses satellite imagery and machine learning to analyse soil health and land degradation in Ghana.

However, it is worth asking who the main beneficiary of ‘AI for Good’ might be. In the marketing of AI by Microsoft, Apple, Amazon, Alphabet, NVIDIA, Meta and all their business partners, the technology is ‘couched as a global opportunity, a promise at the feet of humankind itself, to extend into a new world space of ease, efficiency’ (Adams, 2025: 18). Meanwhile, criticism of AI is growing fast, with talk of a new AI backlash (Rogers, 2025). Rather than seeing AI as a force for good, more voices are articulating the idea that it ‘deepens poverty, fractures community and social cohesion, and exacerbates divides between people and between groups’ (ibid.: 11). Worse still, business and investment voices have started to sound warnings: Goldman Sachs has massively revised down its estimates of the extent to which AI will boost productivity (Bender & Hanna, 2025: 193). Talk of AI for Good serves to distract from thoughts of a world actually made worse by AI. Such distraction is crucial if the big AI companies are to continue to attract investment.

It’s no surprise, then, to find that the major sponsors of the ‘AI for Good’ conference are the big vendors and their associates (Microsoft, PwC, Deloitte, IBM, Shell, Amazon, Samsung, Alibaba, Huawei, Dell, Lenovo, etc.). It’s no surprise either to learn that the event is packed with demos of robots and drones, or that many of the speakers are tech executives and CEOs (Sam Altman of OpenAI was the star speaker last year). This year, one of the invited speakers, Abeba Birhane, was asked, shortly before her talk, to remove some of her slides. Anything that mentioned Palestine had to go, the word ‘genocide’ had to be removed, and so did a slide referring to illegal data torrenting by Meta (Goudarzi, 2025). Birhane commented:

It feels like when they are claiming AI for social good, it’s only good for AI companies, good for the industry, good for authoritarian governments, and their own appearance. They pride themselves on having tens of thousands in attendance every year, on sponsorships, and on the number of apps that are built, and for me that really is not a good measure of impact. A good measure of impact is actual improvement of lives on the ground. Everywhere you look here feels like any other tech summit rather than a social good summit. So, it’s really disappointing to witness something that is supposed to stand for social good has been completely overtaken by corporate agenda and advancing and accelerating big corporations, especially their interests, rather than doing any meaningful work.

Outside this year’s summit, a group of about 100 activists protested to accuse the major tech companies of complicity in war crimes against Palestinians. The complicity of IBM, Microsoft, Google and Amazon is catalogued in the report by Francesca Albanese for the UN Human Rights Council, ‘From Economy of Occupation to Economy of Genocide’.

Time to return to the Cambridge ‘Element’. I think it is much more deeply informed by the ‘AI for Good’ tropes than it is by any evidence. It is probably driven primarily by naivety and wishful thinking, but its net effect is to contribute to the hype of GenAI and to promote the corporate agenda.

References

Adams, R. (2025) The New Empire of AI. Cambridge: Polity Press

Bender, E. M. & Hanna, A. (2025) The AI Con. London: Bodley Head

Goudarzi, S. (2025) AI for good, with caveats: How a keynote speaker was censored during an international artificial intelligence summit. Bulletin of the Atomic Scientists, July 10, 2025 https://thebulletin.org/2025/07/ai-for-good-with-caveats-how-a-keynote-speaker-was-censored-during-an-international-artificial-intelligence-summit/

Moorhouse, B.L. & Wong, K.M. (2025) Generative Artificial Intelligence and Language Teaching. Cambridge: Cambridge University Press

OECD (2023) Generative AI in the Classroom: From Hype to Reality. Paris: Organisation for Economic Co-operation and Development. https://one.oecd.org/document/EDU/EDPC(2023)11/en/pdf

Rogers, R. (2025) The AI Backlash Keeps Growing Stronger. Wired, 28 June 2025 https://www.wired.com/story/generative-ai-backlash/

Selwyn, N., Ljungqvist, M. & Sonesson, A. (2025) When the prompting stops: exploring teachers’ work around the educational frailties of generative AI tools. Learning, Media and Technology, 23 July 2025 https://www.tandfonline.com/doi/full/10.1080/17439884.2025.2537959

