Someone released an AI model that makes deepfakes of me, without my consent

TW: reporting and discussion of AI deepfakes, non-consensual imagery, online harassment, and suicides

During a call discussing deepfakes, I was telling someone that thankfully, there weren’t any deepfakes of me out there—when I discovered that someone had released an AI model that makes deepfakes of me. This wasn’t just a few pictures, but a whole custom model that anyone can download and use to generate thousands of pictures of me—saying or doing things I’ve never said or done. And yes, it can generate NSFW images.

On the website where the AI model was made available, a series of sample images showed deepfaked “me” wearing work outfits similar to those I’d worn in public before. Some images made AI “me” look like a 14-year-old, wearing t-shirts with defamatory text. Some showed me looking like I did in my late teens; a friend said, “wow, it really looks like you did back then.” The ickiness and revulsion of it all. This is a sickness.

They also mocked the copyright win I had in Luxembourg by making pictures of deepfaked “me” in the same pose as the model in my work. Even though I won the lawsuit in the end, I initially lost the first instance because a judge said my pose lacked originality. It became a harassers’ favorite activity over the next few years—making replicas of the original through AI or mutilation of the original image, to show how ‘easy’ it was to do, to prove how worthless my work was, how worthless art was. Just unadulterated hate and harassment, knowing that there was nothing I could do about what they were doing to me.

[Image. Left: my original. Right: deepfake made with my face in a similar pose. The user generated it via text prompt.]

I spent 2 years fighting and enduring vitriol about my gender, my race, my job—that photography wasn’t art, that I deserved to have my home address doxxed, that I’d kill myself. Someone said I should just go be a sugar baby because I was taking something away from a man—all because I tried to stop a guy from stealing and taking credit for my work.

The tiniest, tiniest silver lining I could have here was that maybe, at least, this wasn’t a NSFW model. But I was mistaken. Despite not being a NSFW model, the model/app can in fact be prompted to make NSFW images featuring my face, and these apps can be downloaded directly from the app store for everyone to use. The ease in and of itself is horrifying. And so the list of things we can’t do anything about grows ever larger, thanks to AI: accelerating crimes at lightning speed, scaling malice at a rate unprecedented in human history.

[Image. Many apps in the app store support similar, popular AI models, allowing people to generate explicit NSFW images with just text prompts.]

Growing Crisis

You might think that hopefully this doesn’t happen often. But sadly, that isn’t true. In a 30-page report on deepfake nudes amongst young people, Thorn found that 1 in 8 minors said they know someone who has been deepfaked.

Thorn’s March 2025 report: Deepfake Nudes & Young People: Navigating a New Frontier in Technology-facilitated Nonconsensual Sexual Abuse and Exploitation

In South Korea, women and girls are finding their pictures taken and used for deepfakes without their knowledge or consent. People’s lives are being destroyed, and no one has any idea what we can do.

Even before generative AI, image-based harassment could lead to devastating outcomes. Sextortion—where scammers threaten to distribute intimate images of victims—is the number 1 “fastest growing cybercrime targeting children in North America ...
most commonly exploits young men ... between 13 to 17”, according to reporting from USA Today. At least 30 deaths by suicide were connected to sextortion in 2021.

“These teenage boys were blackmailed online – and it cost them their lives”—USA Today

With AI’s ability to “increase efficiency,” crime is growing at an unprecedented scale, yet little attention is being paid to the safety sacrificed in the name of this efficiency.

Deepfakes Made Easy

Unlike in the past, when creating fake images took perpetrators hours just to make one, AI is advancing so quickly that today, it’s become trivial to make a custom AI LoRA model like the one made of me. Within the same app, a person can create a model to share with fellow harassers, or download one made by someone else. Importing a custom model is as easy as downloading a browser extension. For some apps, a single photo is all you need to turn someone’s face into a video:

The above example came from searching Twitter for “video single photo ai”; there are plenty more if you want to look.

The Legal Route

People like to say “just sue them” as if it were easy. But in practice, it’s extremely difficult to do so. Take my experience for example: the copyright lawsuit took 2 years of my life. I couldn’t work because of the mental toll, but also because any time I had went to learning how to become my own legal intern.

When my home address was doxxed, I couldn’t file a police report: a criminal case filed at that point would have taken precedence over the copyright case, delaying things further. I was already in hell, so of course I didn’t want things to drag out even longer. In the end, I never filed the police report. That’s the reality.

10-Year Ban On States Regulating AI

There are countless unfair and complex reasons why you can’t simply sue over something. And given the lack of AI regulation, you also have to consider that new harms from AI may not yet be considered illegal. For example: the US just passed a nationwide law criminalizing revenge porn, which means that until now, depending on the state you lived in, it was legal for someone to share explicit photos of you, and there was nothing you could do even if it destroyed your life. This federal law comes 21 years after the state of New Jersey first passed something similar to protect its residents in 2004. TWENTY-ONE YEARS.

House Republicans Push 10-Year Ban on State AI Regulation—Government Technology

Standardizing regulations at the federal level would make perfect sense if it didn’t take so long to happen. If US states get banned from regulating AI now—what are we supposed to do in the meantime? Watch new AI harms destroy our lives for the next 21 years? We shouldn’t have to accept lives being ruined as a new normal just so AI can develop at breakneck speed. What I, and every woman and child who experiences this, have gone through should not be an acceptable cost of this technology.

If governments allow AI development to continue unchecked, then recourse and protection must be provided, with haste, to every individual harmed by AI. You cannot place the burden of dealing with these AI-exacerbated harms on victims who had no choice in being subjected to what’s happening. Not when it’s due to irresponsibility and a refusal to regulate for the safety and protection of the people. If life-saving medicine requires extensive testing before release, why does life-ruining technology have little to no guardrails and oversight?
They say OSHA regulations were written in blood—I don’t want to see the pains of today turn into an unstoppable crisis we’re forced to live with and accept as a new normal. There shouldn’t be a threshold number of destroyed lives required before governments ask AI companies to do better. Regulation requiring AI to be built safely needs to happen now.

I’ve spent a decade talking to researchers about deep learning, long before the diffusion models of today. I wanted to see the good in AI and all the promises we’ve dreamt about over the years. So while I could “adapt” if I wanted to, I don’t want a world where countless innocent people have to serve as sacrifices on the altar of efficiency. I don’t want to adapt to a new world where it’s okay to make surviving harm the new standard of living.

Today, someone made an AI deepfake model of me. Tomorrow, it could be you, your child, or someone you care about. I am sharing this because I am upset, yes, but also because there is still time to act: start practicing good digital privacy hygiene, safeguard your personal photos and data from AI tools, and learn about AI’s capabilities and issues. Support those fighting for safety regulations, and support representatives who still care about protecting people from harm.

The longer we stay silent, the more it protects those who cause harm, and the closer the effects and dangers will bleed into our own lives. The time to speak up is now, and I implore you to do so, please.

