AI-generated social media personas are gaining large followings with conservative-themed posts, often to sell adult content, not to influence voting.
In short: AI-generated “influencers” are attracting large conservative audiences on major social platforms, mostly as a way to make money.
Some social media accounts that look like real people are actually made with AI. They post videos and photos on TikTok, Instagram, Facebook, and YouTube. The content is often framed as patriotic and strongly pro-Trump.
One widely shared example is “Jessica Foster,” a glamorous “patriot” sometimes shown in a military uniform and pictured with political symbols and public figures. The images can look very realistic at first glance. But there are also telltale mistakes, like odd-looking hands, incorrect uniform details, and slogans that change from post to post.
According to reporting, many followers, often conservative men, interacted with the account as if it belonged to a real person. The account was later identified as an AI-made persona that steered viewers to OnlyFans, a paid subscription site, for adult content, including foot fetish posts. The reporting described earnings of about $300 per post.
A key point is motivation. In this case, the main goal described was monetization, not a proven attempt to target voters or change election outcomes. Some commentators worry that similar techniques could be used for political influence by governments or campaigns, but that is speculation and not confirmed in this specific example.
As elections approach, concern is growing about AI-generated media used to mislead people, sometimes called deepfakes: fake videos or audio that look and sound real. Viewers can watch for platform labels on manipulated media, which some states require in political contexts, and can cross-check claims against multiple trusted sources. The harder problem is that AI content keeps improving, so obvious visual glitches are a less reliable tell than they used to be.
Source: NYTimes