AI-generated influencers are here. Can you tell who’s real? | The Excerpt
Dana Taylor
On the Monday, April 27, 2026, episode of The Excerpt podcast: AI-generated influencers are reshaping the attention economy, making it harder to tell what's real online. Charlie Warzel, staff writer at The Atlantic, joins The Excerpt to discuss.
Hit play on the player below to hear the podcast and follow along with the transcript beneath it. This transcript was automatically generated and then edited for clarity. There may be some differences between the audio and the text.
Dana Taylor:
Generative AI is making it easier than ever to create fake people on social media. These aren't just bots though. Today's versions are actual influencers with faces, personalities, and millions of followers. Some promote wellness products, others are wielding political influence. And in many cases, people don't realize or perhaps don't care that they're not real. So how is this dystopian reality shaping our future?
Hello and welcome to USA TODAY's The Excerpt, I'm Dana Taylor. Today is Monday, April 27th, 2026. Here to help unpack this evolving story is Charlie Warzel, a staff writer at The Atlantic and the author of the Galaxy Brain newsletter that explores technology, media, and big ideas. Charlie, thank you so much for being here.
Charlie Warzel:
Thanks for having me.
Dana Taylor:
Charlie, when did AI-generated influencers stop being a novelty and become something that actually matters in an economy based on attention?
Charlie Warzel:
I think in a lot of ways they are still a novelty while also mattering. This is still a very new phenomenon. Throughout the entire history of the internet and the social web, there have always been fake people. There are fake Facebook accounts of people who don't exist, and people on message boards a long time ago were pretending to be people they weren't. It's kind of a core tenet of the internet to be anonymous and role-play as somebody else.
But with the advent of generative AI tools and large language models, really we've seen in the last, let's say 18 months, a leap forward in the technology that allows these AI avatars to actually be somewhat convincing. They're not always convincing, but they're much better than they were say three or four years ago, orders of magnitude better. And that means that people are using them. They're using them to create social media accounts with lots of followers to hawk products, some of which may or may not be a little on the snake oil side. This is something that I think is still a very new phenomenon. We're still just seeing the vanguard of this appear, but it is also rooted in something that has been on the internet for a very long time, which is fake people.
Dana Taylor:
Incentives to scale, influence, make money or push messaging have always been there. What's changed is tech that now makes it cheap, fast, and accessible to create these personas at scale. Does this evolution in AI influencers just end up mushrooming and making our feeds impossible to navigate?
Charlie Warzel:
I think social media now is so different from what it was. And I think the media in general has evolved from being something that was niche but very authentic to something that is extremely widespread and, in a lot of ways, very, very inauthentic. So much of the marketing that happens on the internet now is this wild volume game where people are essentially spamming. You and I would think of it as spamming a feed with as many different versions of the same thing as possible. You're trying to sell a product, and you're going to record a video of yourself with that product, talking in different backgrounds, with different emotions, talking 500 different ways about it, in order to find the thing that is most pleasing to the algorithm, that hits with an audience, that goes viral and is seen by the most people.
So people are just getting this enormous number of at bats in the hopes of trying to find the thing that sticks. And what these synthetic influencers, these AI avatars do is they make it so easy to set up and prompt a bot and a large language model to create a person who's going to say all these different things all these different ways. You don't have to pay an influencer to speak a hundred different ways about a product. You get to iterate on the order of your keyboard and see how that goes. And so for marketers and for people who are just trying to play that volume game, this is an extremely useful tool.
Dana Taylor:
Are we looking at a shift where an emotional connection with a synthetic persona can outweigh authenticity?
Charlie Warzel:
I think this remains to be seen, but what I have noticed with certain influencers is that there are times when someone will stumble upon an AI influencer and feel wronged. They will feel like, "Oh, I was duped, I was scammed. If this company, if this person, if this influencer can't even deign to put a real human being out there in front of it, why should I care?"
And so there's, I think still a feeling of, I do not want to engage with something that isn't authentically human. On the flip side of that, I have seen people say that the whole point and purpose of an influencer is a life that is aspirational in some way, right? Someone who has access to products that you would never get to have or access to experiences or lives a luxurious lifestyle that is out of reach for most people.
And that unattainability translates over to the idea of something synthetic, right? What is the difference between someone who is traveling around the world and sitting, sipping cocktails in infinity pools when you're making an hourly wage and grinding yourself down? What's the difference between that lifestyle and something that's totally fake? Some people in that sense maybe don't care.
And I think one thing that we've seen with the use of generative AI out in the world, and we've seen it sadly in politics, is that when something reaches you in your feed and provokes an emotional response that you either feel is righteous or that gives you some kind of pleasure, a lot of people just don't care what the provenance of it is. If it feels right, if it feels true, if it is useful to me to then go share, some people don't care. And so I think we're really in a space where it's unclear at the moment how people will react to this lack of authenticity.
Dana Taylor:
We're seeing examples like the so-called MAGA girls, where AI-generated personas are building real followings and even making money. What does that tell us about how easily influence can be engineered?
Charlie Warzel:
It's very cheap to engineer this type of influence. You can pay these companies that essentially are an updated AI version of click farms to create all kinds of different personas. They have all kinds of different identities, right? It's almost like in a video game where you get to choose your character. I mean, you can have people with different sexual orientations, different accents, lisps. You can have facial features and blemishes and all kinds of things to customize exactly the way that you want to have it. And because it's so cheap, you can flood the zone with all of that and you can tailor it.
And I think in one way, I've heard that, especially online with younger people, there's a lot of experimentation here, right? Someone just creates an avatar for themselves like you would in a video game like the Sims or something, and it's a way that they play around with issues of identity and it can be experimental in a positive way.
And then there's the way in which these different identities are being used in political settings for propaganda. Tiffany Hsu at The New York Times reported a story recently that was very interesting, that there are a lot of these synthetic AI MAGA influencers out there purporting to be Trump voters who are posting on the internet in favor of the administration's war in Iran, which is almost historically unpopular. And so this is being used, these fake people are being used to signal to other people that there is support for this war when there really isn't. So in that sense, that is a very alarming situation where it's essentially this propaganda tool that's made to justify violence on behalf of the United States, and I think we're going to see a lot more of that.
Dana Taylor:
As I mentioned, a lot of these AI influencers are showing up in health and wellness spaces as well. Why is that ecosystem particularly vulnerable to synthetic people and misinformation?
Charlie Warzel:
Yeah. The health and wellness space, especially when you get into the world of supplements, is poorly regulated, or in some cases not regulated at all. It's a little bit of a wild west in general, and it tends to attract a lot of people who are in the business of getting rich pretty quick. Historically, a lot of political grifters and folks like that get into the supplement and wellness space, sometimes as a way to fundraise on the side.
The conspiracy theorist Alex Jones, who runs the company Infowars, he built up a giant store of supplements and vitamins that are supposed to protect you against radiation or boost your testosterone or whatever, and that was a way to fund his conspiracy media empire. You have this industry which is always looking for this leg up and also doesn't have that kind of regulation around it, and that makes it a very fertile ground for something like these AI influencers. I think it's a perfect marriage of form and function in that sense.
Dana Taylor:
These AI personalities have evolved from selling products, or are still doing that, of course, to pushing political messaging at scale. When you think about the red flags you've already talked about, when you think about 2016 and the Russian troll farms that used fake online personas to influence conversations about politics, and then as we head toward the 2026 midterms, do AI-generated influencers pose a similar risk here, or is this something fundamentally different?
Charlie Warzel:
I think what we learned about the Russian troll farms and the 2016 election is that those fake or synthetic articles and social media accounts didn't, or at least there's not a lot of evidence that they did, really persuade anyone. It's not as though people wanted to vote for Hillary Clinton in the 2016 election, got yelled at by a bunch of online trolls, and all of a sudden cast their vote for Donald Trump. So it's not really about persuasion, and I feel like these AI avatars represent the same thing.
It's much more about augmenting the fights that already exist, taking the polarization that already exists inside the American electorate and inflating it, making everything worse. If you feel like you have a political enemy, those AI avatars can come out and be the worst version of that person and just make you hate the other side even more. They inflame the tensions that already exist. And I think that is the biggest threat we have here.
Now, I think it's entirely possible that we as human beings will learn to get better at spotting these things. Unfortunately, there's a bit of an arms race. These things are getting more realistic as we get more used to having to think about spotting them, but I think that we shouldn't ever worry about this as brainwashing people. I think the real worry is that bad actors use it to inflame the tensions that already very clearly exist and make the political conversation that much more hostile to different people, that much less edifying and enriching, and either cause people to drop out or cause them to continue to fight and push us further into this political dysfunction that we live in.
Dana Taylor:
Right now, disclosure rules and enforcement seem inconsistent at best. Are platforms equipped to handle this onslaught, or are they structurally incentivized to let it proliferate?
Charlie Warzel:
A big trend that we've seen since, let's say, 2023 or 2024 is that a lot of the online platforms have gotten rid of a lot of their trust and safety teams. A lot of them have rolled back rules around content moderation. There's a real hands-off approach in Silicon Valley, and a lot of that has to do with catering to the desires of the current presidential administration. As a result, there's a real let-it-rip attitude here. These platforms are the Wild West: look out for yourself.
The only thing that is really, really enforced is copyright. So if you are infringing on someone's copyright, if you create an AI avatar of an IP character, some business is probably going to flag it and then it's going to get taken down. But if you create an average Joe who is just trying to harass or troll somebody or hawk some kind of snake oil product, chances are it won't be taken down. At the moment, regulation can't keep pace with the proliferation of these programs.
Generative AI is getting better at such a rate that lawmakers are unable to keep up. I would say the biggest issue with this, as we were talking about, is the ease and the speed with which you can make these things. If you can spam the internet with these, lawmakers and people who are trying to take these things down have to go one by one, whereas I can create an entire army of synthetic influencers to just go out and spread my message across a ton of different platforms. It's a whack-a-mole game, but it's not really a fair fight.
Dana Taylor:
In the recent past, AI-generated images and videos were fairly easy to spot. Some may recall the famous 2024 Mother's Day picture of Kate Middleton, the Princess of Wales, and her children, in which some of the figures had physical abnormalities. You could see that it was faked, but it still managed to dupe a lot of people. The technology has gotten more sophisticated since then, though. What should we look out for or consider before engaging with an influencer that could be AI?
Charlie Warzel:
Right. It used to be that the fingers, the hands, were a dead giveaway, if there were too many fingers or not enough fingers or something like that, but we're well beyond that. As for things to look for, Tiffany Hsu, the journalist at The New York Times who's done such great reporting on this, recently walked me through how to spot some of these AI avatars and AI videos. If you can, zoom in and look at the eyes. Sometimes you will see a reflection that doesn't really make sense: if you zoom into the eyes, it'll look like they're indoors when they're really outdoors in the video. Lighting in general is a great tell. Sometimes you'll see a sun in the background of the video, but they're front-lit in a way that doesn't really make sense.
It's also really useful to look at how they're holding something, like a cup, and the way it sits in their hands. There's a fake Amish influencer that Tiffany was walking me through, and she was holding a Costco rotisserie chicken. And this is gross, but the chicken was dripping in this very weird, unnatural way, like honey coming off the chicken. You wouldn't notice that if you're just scrolling through your feed, but if you actually zoom in on it, you go, "Well, that's not how a cooked chicken behaves." So it's looking for those little clues. But of course, the fear is that over time these things get better and better, and we have to update our heuristics for how we think about this.
Dana Taylor:
It is fascinating. Thank you so much for joining me on The Excerpt, Charlie.
Charlie Warzel:
Thanks for having me.
Dana Taylor:
Thanks to our Senior Producer Kaely Monahan for her production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to [email protected]. Thanks for listening. I'm Dana Taylor. I'll be back tomorrow morning with another episode of USA TODAY's The Excerpt.