EOIR’s AI Memo: Don’t Use It on Government Computers

Earlier this month the acting head of EOIR published a policy memo on the use of AI by the private bar, which mentioned a previously unknown EOIR policy about internal use. I had to see what it said, so I filed a FOIA request. Today I got the policy. It's an internal IT security notice dated May 22, 2025.

The subject line reads: “Updated – Unauthorized use of AI services on GFEs.” “GFEs” means Government Furnished Equipment—laptops, desktops, or phones issued to DOJ staff.

What the Memo Says

The notice is short and direct:

“The unauthorized use of AI services is never allowed on DOJ Government Furnished Equipment (GFEs).”

Following that is a long list of prohibited sites: ChatGPT, Claude, Copilot, Grammarly, Notion AI, Perplexity, QuillBot, and various others. The stated reasons are data privacy, cybersecurity, and the risk of “generation of malicious content or misinformation.”

That last one stands out. It shows EOIR’s IT staff recognize misinformation as a genuine risk of generative AI. Still, the memo is clearly an IT policy, not a content policy. Its focus is protecting DOJ’s network and hardware, not regulating the use of AI in decision-making.

There is also a partially redacted line:

“The EOIR Cybersecurity team has observed [REDACTED].”

That phrasing suggests the policy was prompted because EOIR noticed staff were already experimenting with AI tools on DOJ machines.

What the Memo Doesn’t Say

This memo doesn’t prohibit judges or attorneys from using AI altogether. It only bans AI use on DOJ-issued computers. In fact, the earlier guidance memo I wrote about two weeks ago left the door open: both attorneys and immigration judges can use AI in their work so long as they verify the accuracy of the output themselves. A footnote in that memo explicitly reminded IJs of the same duty.

In other words, the official policy is: don’t use AI on government equipment, but if you use it elsewhere, you’re responsible for double-checking the results.

A Bit of Irony

In January 2025, Acting Director Sirce Owen issued a policy memorandum criticizing the prior administration for relying on “secret operational policies,” arguing they undermined transparency and even basic principles of administrative law. Yet this AI prohibition was itself an internal, undisclosed policy until now. It only came to light through FOIA.

That disconnect makes Owen’s memo ring a little hollow. If EOIR is serious about avoiding “secret policies,” why wasn’t this one shared with the public when it was issued?

Takeaway

This FOIA result shows EOIR is taking AI seriously, but mostly as a security risk to its own hardware. Judges and attorneys remain free to use these tools on their own equipment, provided they take responsibility for the accuracy of their work. And it remains to be seen whether EOIR will issue any actual guidance to Immigration Judges on whether they can or can't use ChatGPT or other AI tools to do things like write their decisions or summarize long documents in the record.

The post EOIR’s AI Memo: Don’t Use It on Government Computers appeared first on Hoppock Law Firm, LLC - a Kansas City Immigration Law Firm.
