Is Any AI Use Ethical?


The past two days I had the opportunity to attend multiple sessions at the summer 2025 conference for “Civics of Technology.”  It’s a grassroots organization which:

…aims to empower students and educators to critically inquire into the effects of technologies on their individual and collective lives. We conduct research, develop curriculum, and offer professional development. Our work seeks to advance democratic, ethical, and just uses of technology in schools and society.

“About Us.” Civics of Technology, https://www.civicsoftechnology.org/aboutus. Accessed 1 Aug. 2025.

Day 2 – Civics of Tech 2025 (CC BY 4.0) by Wesley Fryer

The theme of this year’s conference was “Communal Resistance to Artificial Systems,” and there were some phenomenal presentations in both the keynotes and breakout sessions. Yesterday I wrote a post sharing some of my key takeaways and new links, and I also created a Google Doc of notes from all the sessions I attended. I’ve titled this post “Is Any AI Use Ethical?” because, in the course of the conference, I realized many of the presenters and participants are in different “ethical and pedagogical places” than I am when it comes to using AI. I shared the comments below in the Zoom chat of one of our Friday conference sessions:

It seems many of the participants and presenters in this conference are “AI is evil absolutists.” For these folks, there is no middle ground of ‘use AI to do helpful things.’ AI is viewed as a fundamental, ‘a priori’ evil, and any use is unethical / immoral. I’m not on that page today, although I certainly oppose techno-fascism and want to actively resist it. But it’s helpful to have my thinking challenged in many ways by this group.

I definitely oppose techno-fascism, which is a strong political and economic movement in the United States in mid-2025. I set up the page, “What Is TechnoFascism,” on the ResistAndHeal.com website I started earlier this year and continue to maintain, along with an accompanying Substack. I acknowledge that the “techbros of Silicon Valley” are colluding with our chief executive and administration to ramp up both government and corporate surveillance, and to advance our pervasive culture of “surveillance capitalism.” I am not happy about these trends, and I have been concerned about the rise of the surveillance state for quite a while. In 2015 I shared the TEDx talk in Enid, Oklahoma, “Digital Citizenship in the Surveillance State.” In that talk I attempted to sound alarm bells about the rise of surveillance culture in the USA and our need, as educators, to raise awareness and take action. But in 2015 I was less sure of the specific actions we should take beyond changing personal app privacy settings and supporting the advocacy work of groups like the EFF.

I have been and continue to be a vocal advocate for “playing with media” as a way to gain knowledge and skills around the uses, affordances, and drawbacks of different media formats and platforms. Today, I’m an advocate of “playing with AI,” and I have documented many of my own AI experiments and lessons learned on ai.wesfryer.com. I am convinced AI technologies present transformative capabilities as a “cognitive force multiplier,” and I have willingly drunk a healthy portion of the “AI Kool-Aid” and the “AI hype cycle.” I’ve said and still believe that AI represents a transformative leap forward in our shared human history of communication, as well as in the information economy and “third wave” work. At this point, I am not and do not want to be an “AI conscientious objector,” refusing to voluntarily use it in any context. I am an advocate for its ethical and beneficial uses, and I plan to remain one.

AI is a super-powerful technology and technological capability. It’s emergent, continuing to improve and change in dramatic ways. I frequently share the refrain, “This is the worst AI we’ll ever use,” and struggle to imagine what our world of ideas, work and culture will look like in 10 or 20 years if AI continues to improve at its current rate.

As I shared in that 2015 TEDx talk, however, and especially as a “recovering school director of technology,” I’m acutely aware that the overall advance of enterprise technology tools seems to be on the side of authoritarians and those who want to exert authoritarian and fascist control over others. This is true within nation-states as well as within schools and universities. This explosion of surveillance tools and capabilities is dramatic, Orwellian, and lamentable. Today’s Civics of Tech presentation by Ian Linkletter, “Centering Student Voices in Resisting Surveillance,” was the most dramatic articulation of these dynamics I’ve read, heard, or watched to date. I encourage you to watch the 90-second video, “Face Detection, Remote Testing Software & Learning At Home While Black — Amaya’s Flashlight.”

I also encourage you to read the resources on Ian Linkletter’s website, “Stand Against Proctorio’s SLAPP! (Strategic Lawsuit Against Public Participation).” The use of surveillance technologies to coerce and oppress students, in consistently discriminatory ways, is both unethical and illegal. Yet it continues, and the economic incentives for edtech companies to energetically participate in this Orwellian Mardi Gras parade are sickening.

Follow Ian on Bluesky at @linkletter.org. His advocacy against the surveillance economy and culture is both instructive and inspiring.

Just put the finishing touches on my presentation for the Civics of Technology 2025 conference tomorrow morning: "Centering Student Voices in Resisting Surveillance". I can't believe I get to speak after @hypervisible.bsky.social! He has taught me so much. @civicsoftech.bsky.social #CivicsOfTech25

Ian Linkletter (@linkletter.org) 2025-08-01T06:34:20.641Z

Today’s Civics of Tech 2025 keynote speaker, Chris Gilliard (@Hypervisible), provided another wake-up call for me. I’m not sure my notes from his keynote adequately capture the passion and seriousness of his ideas about surveillance technology and our moral imperative to resist these forces. We probably all know someone who is not a current user of AI technologies, for different reasons. Some are overwhelmed, some are disinterested, but some are morally opposed to their use. In ANY circumstance. Those are all important perceptions, beliefs, and choices to understand and respect. The Civics of Tech 2025 conference definitely expanded my own understanding of different perspectives on AI, and it helped better prepare me for the imminent start of another school year (my 31st in education), which is sure to be full of conversations about AI, its use, abuse, and ethics.

What do you think of the idea that “all AI use is categorically immoral and unethical”? This is an important and powerful idea to explore. Perhaps it means that, as a current middle school teacher, I should give my students the option to “opt out” of lesson activities which include AI use? Like social media, AI technologies invite strong opinions and divergent perspectives.

Many important ideas to consider. Thanks so much, organizers, presenters, and participants in the 2025 Civics of Technology Conference! You’ve given me a lot to think about, as well as many new people to continue to connect to and learn with!

* – AI Attribution: I did not use any AI tools to compose, write or edit this blog post. I did create the included images with ChatGPT, and edited them with Canva.com.

Is All AI Use Ethical? (CC BY 4.0) by Wesley Fryer

