Sunday, July 30, 2017

why we missed Trump's victory

As a fully frocked political scientist with some modest campaign experience, I watched last year's presidential contest with special attention to campaign organizations, spending, and themes, and to the resulting polls. I knew that polls nowadays fail to reach and get responses from over 90% of the people they try to contact, so I gave greater weight to aggregators.

I couldn't believe that Trump could win if he failed to run a traditional campaign of heavy media spending and strong organization, especially for getting out the vote [GOTV]. He did neither on a national scale, and even his swing-state efforts seemed modest -- though they were evidently enough in a few Rust Belt states to give him an Electoral College victory.

Now I understand that I -- and we in the observer class -- missed where the Trump campaign was supercharged: on social media, often driven by bots. I don't follow or participate in social media, so I missed it, just as campaigns in earlier decades missed the new tactic surprise winners used.

There's new academic analysis of the role of the bots, summarized here by the Columbia Journalism Review, with a link to the journal article:

Bots played a huge role in promoting the spread of misinformation and disinformation during and following the 2016 campaign season. A just-released study from a group of scholars at Indiana University in Bloomington, which analyzed 14 million Twitter messages, finds that bots were more likely than humans to be “super spreaders” of fake stories, playing a crucial, early role in making certain stories go viral. Bots spread the stories through tweets, retweets, replies, and mentions. Bots will repetitively add @realDonaldTrump to tweets with false claims, for instance, to propel false stories to surface more often.

While this is not a eureka moment in the battle against fake news, it is compelling evidence that so-called “bot armies” are a cornerstone of the misinformation strategy. The study’s findings also suggest a path forward for the fight against fake news. If bots are able to take advantage of an ecosystem that rewards their sheer force of numbers, then disabling overactive bots would be a way to slow the spread of such claims. Researchers, including those at Indiana, are developing more reliable ways to detect whether a particular account is a bot or not. The study I described above relies on a tool called a Bot-o-meter, which scores Twitter accounts based on how likely they are to belong to a human.
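The scoring idea behind a tool like the one described above can be illustrated with a toy sketch: combine a few account-level features into a single bot-likelihood number. This is a hypothetical heuristic for illustration only, not the actual tool's method; all feature names and weights here are invented assumptions.

```python
# Toy bot-likelihood scorer (hypothetical heuristic, not the real tool).
# Combines simple account features into a rough 0..1 score.

def bot_score(tweets_per_day, account_age_days, followers, following):
    """Return a rough 0..1 score; higher = more bot-like. Weights are arbitrary."""
    score = 0.0
    if tweets_per_day > 50:               # extreme posting volume
        score += 0.4
    if account_age_days < 30:             # very new account
        score += 0.3
    if following > 10 * max(followers, 1):  # lopsided follow ratio
        score += 0.3
    return min(score, 1.0)

# A hyperactive, brand-new account with a skewed follow ratio scores high:
print(bot_score(tweets_per_day=120, account_age_days=10, followers=5, following=2000))  # 1.0
```

Real classifiers in this space use hundreds of features (timing patterns, content, network structure) and trained models rather than hand-set thresholds, but the input-features-to-score shape is the same.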

This story also comes after the recent controversy in the Gulf between Saudi Arabia and Qatar, in which bot armies promoted smear campaigns against Qatar and Al Jazeera. As David Carroll pointed out, bot armies are "a global problem deployed as info ops against media outlets," and he provides a fascinating list of the propaganda bot networks and hashtags involved.
Now we know where else to look for campaign information, but not what to do about it.
