How to Identify Misinformation and Bots on Social Media in the Age of Generative AI

17 September 2024 | Zaker Adham

With the upcoming election season, bots are flooding social media, and recognizing them has become increasingly challenging. However, once you know the signs, spotting bots can become second nature. Dive into any heated political thread online, and you’ll often notice patterns of bot activity—such as identical comments from different users or mass attempts to debunk a particular post that might harm a specific candidate’s reputation.
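One of those patterns, identical comments posted by different accounts, is simple to check for programmatically. The Python sketch below groups comments by normalized text and flags any wording shared by several distinct users; the (username, text) input format is a hypothetical schema, not any platform’s actual API.

```python
from collections import defaultdict
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so that
    near-identical copies of a comment map to the same key."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def find_copypasta(comments, min_users=3):
    """comments: iterable of (username, text) pairs (hypothetical schema).
    Returns each normalized text posted by at least min_users distinct users."""
    users_by_text = defaultdict(set)
    for user, text in comments:
        users_by_text[normalize(text)].add(user)
    return {t: u for t, u in users_by_text.items() if len(u) >= min_users}

# Example: three accounts pushing the same talking point with cosmetic tweaks.
sample = [
    ("user_a", "Candidate X voted to RAISE your taxes!!"),
    ("user_b", "candidate x voted to raise your taxes"),
    ("user_c", "Candidate X voted to raise your taxes."),
    ("user_d", "I actually disagree with this take."),
]
print(find_copypasta(sample))  # flags the repeated talking point
```

Real coordinated campaigns vary their wording more than this, so production systems lean on fuzzy matching rather than exact normalization, but the underlying signal, many accounts converging on the same text, is the same one a careful reader can spot by eye.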

What’s harder to detect today are the troll bots themselves. Back in 2016, when foreign accounts attempted to influence the U.S. election, they were often easy to spot: new accounts with no followers, strange names, and no photos. Today’s bots have evolved. They often have followers, profile pictures, and polished grammar, making them appear more human. This evolution coincides with a wider wave of political disinformation that is harder to combat, driven by the rapid advancement of generative AI.

AI-Generated Political Misinformation on the Rise

In the 2010s, bots were rudimentary and easy to block. Today, AI-powered bots are far more sophisticated. They can hold conversations, craft well-structured arguments, and even persist in discussions when challenged. The danger lies not only in their ability to spread disinformation, but also in the polarizing effect they have on social media, as noted by Lance Y. Hunter, a professor of international relations at Augusta University. According to Hunter, social media fuels environments ripe for anger and hostility, which can spill over into real-world political violence.

Now, with generative AI entering the scene, the landscape of disinformation is changing dramatically. AI is used to spread fake political endorsements, baseless rumors, and fabricated news stories. For example, ahead of the January 2024 New Hampshire primary, an AI-generated robocall mimicked President Joe Biden’s voice and falsely urged voters to stay home. These AI creations are increasingly realistic, making it difficult to distinguish fact from fiction.

Generative AI: Making Bots Smarter and Cheaper

The spread of misinformation isn’t just a foreign threat anymore. With AI technology becoming more accessible, creating bot-driven campaigns no longer requires advanced coding skills. Platforms that enable users to build bots for spreading disinformation are now available for as little as $20 a month. That ease of access, combined with looser moderation of extreme speech on platforms like X (formerly Twitter), creates a perfect storm for disinformation.

Bots and disinformation don’t just affect elections: they’ve been used to attack public figures, promote criminal activity, and sway public opinion on non-political topics. Social media platforms like TikTok, Facebook, Instagram, and Reddit also host such disinformation.

The Growing Challenge of Spotting AI-Generated Content

As AI-generated content becomes more sophisticated, it’s becoming harder to spot fakes, especially for the average social media user. Early generative AI images were often distinguishable by oddities such as distorted hands or garbled text. Today, these issues are becoming less common, making it increasingly difficult to tell real content from AI-generated creations.

AI’s ability to generate human-like text is also a growing concern. In the past, bot campaigns required real humans to write convincing posts. Now, AI can generate these posts almost instantly and with greater complexity, making disinformation campaigns even more effective and harder to counter.

What Can Be Done?

Bot-detection tools are advancing too, but they are losing ground as AI technology evolves. Tools like Bot Sentinel, once relied upon to flag suspicious accounts, are struggling to keep up with more advanced bots that can evade detection.
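To see how such heuristics work, and why they are falling behind, here is a toy account scorer in Python. This is not Bot Sentinel’s actual method; every field name, weight, and threshold is an illustrative assumption. Tellingly, the signals it checks are exactly the ones today’s bots have learned to fake.

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical fields; a real tool would pull these from a platform API.
    age_days: int
    followers: int
    following: int
    posts_per_day: float
    default_avatar: bool

def bot_score(acct: Account) -> float:
    """Return a 0-to-1 heuristic score; higher means more bot-like.
    Weights and thresholds are illustrative guesses, not tuned values."""
    score = 0.0
    if acct.age_days < 30:       # freshly created account
        score += 0.25
    if acct.followers < 10:      # almost nobody follows it
        score += 0.20
    if acct.following > 20 * max(acct.followers, 1):  # mass-follows to look active
        score += 0.20
    if acct.posts_per_day > 50:  # inhuman posting rate
        score += 0.25
    if acct.default_avatar:      # never set a profile photo
        score += 0.10
    return min(score, 1.0)

# A 2016-style troll account scores near 1.0; a modern bot with bought
# followers, a stolen photo, and a realistic posting cadence scores near 0.
print(bot_score(Account(age_days=5, followers=2, following=800,
                        posts_per_day=120, default_avatar=True)))
```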

To combat this growing threat, experts suggest adding transparency measures to social media platforms, such as requiring watermarks on AI-generated content. This would help users better differentiate between real and AI-generated images and videos. However, achieving this will require cooperation between tech companies, lawmakers, and social media platforms.
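Most watermarking proposals pair an embedded marker with metadata that tools can inspect. The sketch below is a best-effort metadata scan using the Pillow imaging library, not a real provenance verifier: it looks for generator names in PNG text chunks and in the EXIF Software tag, and the GENERATOR_HINTS list is an assumption. Because metadata is trivially stripped, robust schemes such as C2PA rely on cryptographically signed credentials, which this sketch does not verify.

```python
from PIL import Image  # pip install pillow

# Hypothetical list of substrings that suggest a known image generator.
GENERATOR_HINTS = ("stable diffusion", "midjourney", "dall-e", "firefly")

def provenance_hints(path: str) -> list[str]:
    """Scan an image's embedded metadata for strings hinting at an AI
    generator. Best-effort only: absence of hints proves nothing, since
    metadata is easily removed or forged."""
    img = Image.open(path)
    hits = []
    # PNG text chunks (some generation tools write their settings here).
    for key, value in img.info.items():
        if isinstance(value, str) and any(h in value.lower() for h in GENERATOR_HINTS):
            hits.append(f"{key}: {value[:60]}")
    # EXIF "Software" tag (0x0131), sometimes set by the producing tool.
    software = img.getexif().get(0x0131)
    if software and any(h in str(software).lower() for h in GENERATOR_HINTS):
        hits.append(f"EXIF Software: {software}")
    return hits

# Usage (with a local file): provenance_hints("suspect_image.png")
```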

In addition to spreading disinformation, bots can also execute other malicious activities, such as credential-stuffing attacks that steal user information. Unfortunately, as Antoine Vastel from DataDome points out, we can’t rely on social media platforms alone to protect us, especially when even basic scams, like cryptocurrency schemes, remain prevalent online.
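Credential stuffing means replaying username/password pairs leaked from one service against another, and it leaves a distinctive trace in login logs: a single source trying many distinct accounts with a very high failure rate. Below is a minimal detection sketch; the (ip, username, success) log schema and the thresholds are assumptions for illustration.

```python
from collections import defaultdict

def flag_stuffing_ips(login_events, min_accounts=20, min_fail_rate=0.9):
    """login_events: iterable of (ip, username, success) tuples (assumed
    log schema). Flags IPs that tried many distinct accounts and almost
    always failed, the signature of replayed leaked credential pairs."""
    stats = defaultdict(lambda: {"users": set(), "total": 0, "fails": 0})
    for ip, user, success in login_events:
        rec = stats[ip]
        rec["users"].add(user)
        rec["total"] += 1
        rec["fails"] += (not success)
    return [
        ip for ip, rec in stats.items()
        if len(rec["users"]) >= min_accounts
        and rec["fails"] / rec["total"] >= min_fail_rate
    ]

# Example: one IP cycling through leaked pairs, plus one normal login.
events = [("203.0.113.7", f"user{i}", False) for i in range(25)]
events.append(("198.51.100.2", "alice", True))
print(flag_stuffing_ips(events))  # ['203.0.113.7']
```

Production defenses combine many more signals than this (device fingerprints, proxy rotation, request timing), but the many-accounts, high-failure pattern is the core idea.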