What are troll farms? How do they spread disinformation online? What can be done to stop them?
Governments are hiring troll farms to flood social media with disinformation, with the aim of suppressing opposition, sowing division, and ultimately threatening democracy. Social media companies have taken steps to end state-sponsored trolling, but some say it isn’t enough.
Keep reading to learn more about what troll farms are and why they’re harmful.
While government propaganda is nothing new, technology has made it easier for countries to spread disinformation, harass critics, and influence the outcome of elections. One report reveals that 30 out of 65 countries make use of state-sponsored trolling—governments paying people to peddle a misleading narrative en masse. They’re able to do this with the help of troll farms, also known as troll factories or troll armies—but what are troll farms, exactly?
In this article, we’ll discuss what troll farms are, how they operate, why they’re harmful, and what can be done to fight them.
Troll Farms 101
Troll farms are professional groups of people who create fake online profiles to populate social media and internet forums with a predetermined message—for example, praising a particular politician or attacking someone critical of the government. They work in a coordinated fashion, sharing and commenting on each other’s posts to create the illusion of a widely held point of view. In some cases, trolling is a service offered by legitimate advertising and public relations firms.
This kind of trolling works especially well on Facebook, which has nearly 3 billion users and an algorithm that rewards popular (but not necessarily factual) content by pushing it onto more people’s news feeds. The troll farms’ tactics have been so successful that, in the run-up to the 2020 U.S. election, their content reached 140 million U.S. users per month.
Troll farms typically aren’t driven by politics but by economics; they’re often located in developing countries where labor is cheap and people are ready and willing to take on paid work. The work involves tasks like creating fake profiles or buying old Facebook accounts, working 12-hour shifts, and posting over 100 comments a day. Since troll farms may operate outside of the countries they target, workers might also spend time studying and refining their writing style so that they appear to be locals.
Which Governments Use Trolling?
By sharing misleading posts and leaving hundreds of comments a week, trolls aren’t just being annoying—they’re weakening democracies. Their behavior interferes with citizens’ access to legitimate information, enables self-serving politicians to stay in power, silences opposition, and promotes divisiveness.
Some examples of governments that have used troll farms are:
Russia: Russian troll farms often employ a divide-and-conquer tactic—as early as 2014, trolls were already pitting Russia against Ukraine to aid in Russia’s annexation of Crimea. Russian trolls also pose as Americans and post inflammatory comments on hot-button issues like gun control and immigration, getting Americans to turn on each other.
The most widely known Russian troll farm is the Internet Research Agency (IRA), which was found to have interfered in the 2016 U.S. presidential election. U.S. Cyber Command reportedly knocked the troll farm offline in 2018, but it’s unclear whether it has since resumed operating surreptitiously.
Philippines: Former president Rodrigo Duterte has admitted to using a troll army during his 2016 presidential campaign to spread positive messages about himself and negative messages about his opponents; he subsequently rewarded his most effective trolls with government positions. Pro-Duterte trolls also harassed Maria Ressa, a Nobel Prize-winning journalist whose news company was critical of the Duterte government.
His successor, Ferdinand Marcos Jr., similarly won the 2022 presidential election with the help of viral—and deceptive—social media posts, which sought to revise history, rehabilitate the Marcos name (his father was a deposed dictator), and paint his main opponent as incompetent.
India: A former troll describes how “social media volunteers” were given several Facebook accounts and mobile phones to attack opponents of Prime Minister Narendra Modi. Their social media campaigns reportedly escalated into hate speech and rape and death threats.
Ecuador: Leaked papers reveal that a PR firm proposed charging the Ecuadorian government a monthly fee to run a “troll center” that would neutralize anti-government narratives online. Other documents show that the country’s own intelligence personnel were involved in trolling journalists and political opponents.
Venezuela: People could sign up to become trolls at Candanga Points—government-sanctioned booths set up in town squares—in exchange for food coupons. Trolls’ tasks included spreading negative messages on the Telegram messaging app about critics of President Nicolás Maduro’s government, such as the food company Empresas Polar.
Steps to End Trolling
In recent years, social media companies have stepped up their efforts to take down trolls. For example, since 2018, Facebook has been releasing regular coordinated inauthentic behavior (CIB) reports—its September 2022 report highlights the removal of two unrelated networks, originating in China and Russia, that spread disinformation about the war in Ukraine, among other topics.
However, such efforts may not be enough. One Facebook employee-turned-whistleblower argues that the company still hasn’t really changed the way it does business—largely acting in the interest of profit—and that CEO Mark Zuckerberg doesn’t hold himself accountable. (For his part, Zuckerberg denies the whistleblower’s allegations that Facebook puts profit over the public good, while adding that the government needs new internet regulations.)
Given the volume of disinformation coming from around the world, simply relying on social media companies to recognize and take down every suspicious page that crops up is unrealistic. Thus, we must consider what else can be done to fight troll farms.
- Enact Laws to Prevent the Coordinated Spread of Disinformation: One expert stresses the need for strong laws in three areas: personal data (the fodder for social media algorithms), antitrust (introducing more competition so that social media companies have a greater incentive to serve quality content to users), and content liability (holding social media companies accountable for the content they amplify).
- Improve Digital Literacy: Schools can educate students (and companies can educate employees), giving them the tools to be more aware of disinformation and more discriminating about what they see on their news feeds.
- Develop More Sophisticated Programs to Hunt Trolls: Governments and organizations can consider working with a third party like Trollrensics, which uses data mining to track and dismantle troll armies.
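  To make the data-mining idea above concrete, here is a minimal, hypothetical sketch of one common heuristic for spotting coordinated inauthentic behavior: flagging post texts that many distinct accounts publish within a short time window. This is an illustrative toy example only—Trollrensics’s actual methods are proprietary and not described in this article—and all function and variable names are invented for the illustration.

  ```python
  from collections import defaultdict

  def flag_coordinated_posts(posts, min_accounts=3, window_seconds=3600):
      """Flag texts posted by many distinct accounts within a short window.

      `posts` is a list of (account_id, timestamp_seconds, text) tuples.
      Returns the set of normalized texts that look coordinated.
      Toy heuristic for illustration; real detectors combine many signals
      (timing, network structure, account age, near-duplicate matching).
      """
      by_text = defaultdict(list)
      for account, ts, text in posts:
          # Normalize case and whitespace so trivial edits don't hide copies.
          normalized = " ".join(text.lower().split())
          by_text[normalized].append((ts, account))

      flagged = set()
      for text, entries in by_text.items():
          entries.sort()  # sort by timestamp
          for start_ts, _ in entries:
              # Count distinct accounts posting this text inside the window.
              accounts = {acct for ts, acct in entries
                          if start_ts <= ts <= start_ts + window_seconds}
              if len(accounts) >= min_accounts:
                  flagged.add(text)
                  break
      return flagged
  ```

  For example, three accounts posting near-identical praise within an hour would be flagged, while an ordinary post from a single account would not. Real-world systems must also handle paraphrased copies, which is why near-duplicate text matching and account-network analysis matter in practice.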
- Consider “Positive Trolling”: While many advertising and PR firms work on changing the narrative for political clients, some have made it their mission to counter the disinformation through “positive trolling” (also known as “white trolling”).