Book review: Computational propaganda

The Oxford Internet Institute is my go-to place whenever I need knowledge about cyberspace, cybersecurity, Internet research, or many other topics. It’s a fascinating interdisciplinary institute, practising what is called social data science: blending data science with social science (sociology or political science, for instance) to study algorithms, artificial intelligence, and large-scale disinformation campaigns. They have scores of PhD students and scientists doing very interesting and exciting research. Occasionally, the scientists release books, such as this one: Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media, edited by Samuel C. Woolley and Philip N. Howard. The book comprises case studies of digital disinformation efforts (with a main focus on certain types of bots) in nine countries, ranging from Canada and Poland to Russia and, naturally, Ukraine.

Ukraine was hit several times on a large scale, both by cyberattacks and by computational propaganda. The Russians used bots of various kinds: impact and service bots, amplifiers, complainers, and trackers. The research found that civil society drove the response, which was decentralized, in contrast to the centralized focus and power of the Russian attackers. Computational propaganda was used to manipulate opinion, sow discord, discredit various Ukrainian actors, and support others.

Russia is surprisingly interesting. It was, until about two weeks ago, a country where VKontakte and Yandex competed with Facebook and Google and were the bigger actors, without the market being skewed. But most fascinating is that the blogosphere, and parts of social media, relied on good reporting, which resulted in well-built fake news. In the blogosphere, posts needed to have well-founded arguments and evidence “right away, preferably with detailed, often highly technical, reports on the matter”. If that failed, hackers were brought in to expose personal emails and grievances that could be exploited against journalists or the political opposition. In other words, evidence was very important. Since 2011 the situation has deteriorated, though. Perhaps this is why the Putin regime has now completely limited access to social media and to foreign sources of information, and forbidden any reporting on the war: so that evidence cannot be found, and cannot be exploited by journalists or the opposition?

As mentioned, bots are used in various ways on the Internet, and they are a fairly large focus in several chapters, one reason being that “bots […] can operate at a scale beyond humans”. In the chapter on Canada, election interference becomes an issue in the elusive question “how can free speech be weighed against foreign interference?” How can national authorities and legislation know that a foreign actor isn’t buying bots to spread information in an election, or that parties or their affiliates aren’t using bots or cyborg accounts (humans and programs working together) to affect the election? Julia Slupska writes incisively about this, discussing the fine lines between foreign interference in elections, national sovereignty, freedom of speech, and the right to reflect and make choices on our own, and how liberal democracies have attempted to limit digital interference in elections. Bots complicate online speech drastically, because anyone can use bots and cyborg accounts: parties, citizens, companies, organizations. And who is to say who is a citizen, by the way, and who constitutes a foreign interest?

Taiwan, unlike for instance Canada, has tried media literacy as a way to counter disinformation. In both countries, “positive” bots are deployed to fact-check news (which, by the way, is how some journalists work: deploying bots to check facts before publishing).

Zeynep Tufekci has written about activists, and the same conclusions can be drawn here: human rights activists and the like are targeted and trolled, especially public figures. When the Euromaidan protests broke out in late 2013, activists were instantly barraged, with harassment and threats raining down on them. Fake accounts, bots, and foreign interests make it very difficult to know exactly who is behind an account. Still, do people change their opinions, and if so, when?

Many of the authors have interviewed people inside various companies (PR firms, software developers, media companies, etc.), which gives interesting insights into how fake accounts are set up, bought, and sold, how bot networks work, how they track and generate data on social media users, and how agenda setting and opinion targeting really work.

Three conclusions stated in the book, and a fourth drawn from it:

  1. Focus on what is said rather than who is speaking.
  2. Social media must be designed for democracy.
  3. Anyone can use bots.
  4. For computational propaganda to work, it’s necessary to influence opinion leaders (on social media) and the agenda-setting media. Study how Steve Bannon worked ahead of the 2019 European Parliament election, or watch The Brink.

If you ever find yourself in need of a deep introduction to computational propaganda, this book is a necessity.

Night and blur – The Bilinda Butchers