
Not that kind of troll. Photo credit: Kai Schreiber via Flickr

If you’ve spent any substantial time on a social networking site, you’ve likely encountered an anonymous troll. They may mock something you’ve said, or a photo you’ve shared of yourself or others. Then again, maybe they’ll say nothing of substance at all, churning out a slew of profanities and insults. Sometimes they hit below the belt; other times they’re easy to swat away and ignore. Either way, a troll’s purpose is inherently fragile: its raison d’être can be shattered by the click of a “block” button.

Much ado has been made about the psychology of trolling, and for good reason. We store much of our lives online: photos, private correspondence, biometric data, tax returns. We spend the rest of our time in spaces we have collectively designated as a digital commons. These virtual public spaces are governed by rules of their own, explicit or otherwise. Like any crowded physical space, they can be noisy, confusing, and easily disrupted. It’s the ideal space for getting your message out, so long as you don’t particularly care about being heard. Think of it as screaming at a rock concert: annoying for the people nearby, but completely ineffective if you want to convince the crowd to do anything.

Those endeavors may be largely fruitless, but they have gained a great deal of ground in one country: Russia. There, the troll as an agent of information warfare on behalf of the state has attracted considerable attention since Putin’s invasion of Ukraine. If recent revelations are any indication, the well-oiled, Kremlin-sponsored troll machine has no intention of closing up shop anytime soon.

State-sponsored or state-sanctioned Internet trolls are nothing new on the Russian Internet, or RuNet, as it is often called. In 2012, a series of emails published by a Russian hacktivist group showed that a youth group with ties to the Kremlin was paying bloggers and journalists to post pro-Putin content online. Activists were also paid to down-vote YouTube videos posted by the opposition, and even to leave hundreds of comments on news articles that had an anti-Putin slant. The leak was huge, but the practice was nothing new. Indeed, a 2013 Freedom House report noted that “Russia [has] been at the forefront of this practice for several years.”

But the practice became even more central to the Kremlin’s information warfare strategy during the invasion of Ukraine in 2014. One firm, the Internet Research Agency, drew a great deal of mainstream media attention last year after a major document leak exposed its operations.

In June 2014, BuzzFeed reported that the Kremlin had poured millions into the agency to fund a veritable army of trolls posting pro-Putin commentary on English-language media sites. Commenters were also expected to juggle several Twitter and Facebook accounts while posting more than 50 comments on various news articles throughout the day. A more recent account described a heavier workload: over two 12-hour shifts, one employee was expected to draft 15 posts and leave 150-200 comments.

“We don’t talk too much, because everyone is busy. You have to just sit there and type and type, endlessly,” one former Russian troll told Radio Free Europe/Radio Liberty a couple of months ago.

“We don’t talk, because we can see for ourselves what the others are writing, but in fact you don’t even have to really read it, because it’s all nonsense. The news gets written, someone else comments on it, but I think real people don’t bother reading any of it at all.”

If they were only trolling comment threads, that would likely be true. Many readers (and writers, sorry) skip the comments. Head over to your favorite mainstream news site and read the comments on any given article. On occasion you’ll find a few gems among the weeds of trolls and spam bots, but they are few and far between. A paid Russian troll would be just one voice among many.

The new age of information warfare may have started out on comment threads, but its biggest battles won’t be fought there. If recent events are any indication, that shift has already begun.

According to a recent account by reporter Adrian Chen in The New York Times, the Internet Research Agency may be behind several larger hoaxes in the United States. The first was a fake chemical spill in St. Mary Parish, La., engineered through a coordinated social media campaign and text message alerts. This “airborne toxic event” of sorts had media coverage and eyewitness testimony. None of it, investigators soon realized, was real.

Months later, many of the same accounts used to spread news of the fictional chemical spill reported an Ebola outbreak in Atlanta. Others told of the shooting of an unarmed black woman, again in Atlanta. At first glance, the three events appeared unrelated, but two of the videos, one purporting to document the Islamic State in Iraq and Syria’s (ISIS) apparent involvement in the chemical spill and the other the shooting of the unarmed woman, appeared to have the same narrator.

Chen’s account should be read in full, not summarized. Nevertheless, it raises a few important questions. For one, are these hoaxes the new face of the 21st-century information war? It would appear so, at least in the short term. Will technological advances in image manipulation make such cons easier? Probably. Will the growing number of social media users? That could swing either way.

In the end, the most important question is one we need to ask ourselves continually: What am I, as a responsible Internet user and media consumer, doing to protect the integrity of the web? Ignoring the trolls screaming in the crowd is a start.

This post also appeared at The Eastern Project.