Tag: Computational propaganda

  • Book review: How to lose the information war

    Book review: How to lose the information war

    I first noticed Nina Jankowicz while reading the report Malign Creativity: How Gender, Sex, and Lies are Weaponized against Women Online. What I didn’t know was that she specializes in Central and Eastern Europe, has been stationed in Ukraine, and knows Russian (and can thus also partly follow Polish, Czech and Slovak). Her second book focuses on that same geographical region and, as the title implies, on information warfare waged by Russia. But she weaves the information wars of the Czech Republic, Estonia, Georgia, Poland and Ukraine together with that of the US, and concentrates on the ways to lose an information war, but also on how to try and tackle it.

    “With the advent of the internet and social media, individual citizens are now ‘news’ outlets themselves.” Countries like Russia use this fact against democracies in order to spread false narratives. In the introduction Nina gives us a thorough dive into the Mueller Report on Russia’s interference before and during the 2016 presidential election. It was far more insidious and elaborate than arranging one protest and a counterprotest at the same time and location. The Internet Research Agency (IRA) managed to run popular Facebook pages like Blacktivist and Being Patriotic, as well as arrange seemingly fun and popular protests in Washington, D.C.

    Nina takes us to five countries that in different ways have tried, and are trying, to fight Russian information warfare: the Czech Republic, Estonia, Georgia, Poland and Ukraine. Through discussions with government officials, politicians and alternative media, she paints a picture of the different ways these countries try to combat Russian interference and pressure. These could provide the US with lessons on how to lose the information war.

    The lesson of lessons

    When it’s in front of you, it’s completely obvious. You ask yourself why you never saw it, or were never able to put it into words. Nina does just this. In the chapter on Estonia, she delves into the issue of the Russian minority: how it is discriminated against and kept from being part of Estonian society. Russia uses this to its advantage, to cast doubt on the Estonian government and majority. How to solve it?

    Whenever we discuss issues related to technology, we tend to reach for technical solutions. Probably because the tech industry wants it no other way. Probably because we are entranced by technology, living in a technoreligious society, believing in technology as a force for good in itself. So, why not simply throw a tech solution at a tech problem? As she writes: “How can any administration that intends to protect free speech censor the authentic opinions of its own citizens?”

    Why not solve this societal issue with a societal solution instead? Simply put: restore trust in government, and give the minority a chance to become part of society as a whole. Try not to evoke bad feelings and animosity between people; heal the rifts. Two important pillars of media literacy (which Taiwan has tried) are schools, as in Finland, and public libraries, with the powerful information and search skills librarians wield to guide citizens through the endless stream of information and literature. Then Russia can no longer use the issue to splinter relations between people and create even bigger rifts. Because one thing Russia never does is invent new issues; it uses old societal problems to sow discord and splinter society and the nation.

    Downsides

    Four downsides of the book:

    1. It was published just after Joe Biden was inaugurated as president of the United States, thus missing the Biden administration’s take on cyber warfare, dual-use technologies, spyware and transnational repression, which differs from that of previous administrations.

    2. It was published one year before the Russian war against Ukraine in 2022, which renders some of the politics described obsolete. Estonia, for instance, has once again grown more suspicious of its Russian minority, so the chapter on Estonia is no longer up to date, although it is still relevant as a historical lesson. The setting for information warfare has changed rather drastically in one year.

    3. Somehow, I really dislike fictional vignettes “capturing” a technology and its implications in the present or future. Carissa Véliz does it in Privacy Is Power. Nina does it too, and it has not aged well, partly because it was written before Biden’s presidency, partly because it is the usual bleak, dry, predictable way of opening an issue, here set in 2028.

    4. The chapter about Ukrainian efforts to promote positive aspects of Ukraine during the Dutch referendum on EU legislation should have been problematized more. Even though the Russians seem to have played a part in the negative campaigning, the Ukrainian part could also be considered foreign interference in an election. Julia Slupska’s piece on election interference is well worth a read.

    Summary

    The book is true to its title. Information warfare pervades it, and it doesn’t confuse information warfare with espionage or cyber warfare. Terms matter here, and so do the differences between them. Although Russia as the focal point narrows the scope, that’s an advantage: to write about information warfare in general, or to include China, Iran, the US or any other country, would water it down. One can’t cover everything and still keep a topic or an issue interesting.

    The lessons from the book are important and relevant. Countries must learn from one another, can’t hide from information warfare, and must develop a battery of countermeasures. And those countermeasures are seldom technological, but rather societal, economic and political. Those are the most important things I learned reading this book.

  • Book review: Computational propaganda

    Book review: Computational propaganda

    The Oxford Internet Institute is my go-to zone whenever I need knowledge about cyberspace, cybersecurity, Internet research or many other topics. It’s a fascinating interdisciplinary institute practicing what is called social data science: blending data science with social science (sociology or political science, for instance) to study algorithms, artificial intelligence and large-scale disinformation campaigns. They have scores of PhD students and scientists doing very interesting and exciting research. Occasionally the scientists release books, such as this one: Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media, edited by Samuel C. Woolley and Philip N. Howard. The book comprises case studies of digital disinformation efforts (with a main focus on certain types of bots) in nine countries, ranging from Canada and Poland to Russia and, naturally, Ukraine.

    Ukraine was hit several times on a large scale, both by cyberattacks and by computational propaganda. The Russians used bots of various kinds: impact and service bots, amplifiers, complainers and trackers. Research found that civil society drove the response, which was decentralized, in contrast to the centralized focus and power of the Russian attackers. Computational propaganda was used to manipulate opinion, sow discord, discredit various Ukrainian actors and support others.

    Russia is surprisingly interesting. Until about two weeks ago, it was a country where VKontakte and Yandex competed with Facebook and Google, and were the bigger actors, without the market being skewed. But most fascinating is that the blogosphere, and parts of social media, rely on good reporting, which results in well-built fake news. In the blogosphere, posts needed well-founded arguments and evidence “right away, preferably with detailed, often highly technical, reports on the matter”. If that failed, hackers were brought in to expose personal emails and grievances that could be exploited against journalists or the political opposition. Evidence, in other words, was very important. The situation has deteriorated since 2011, though. Perhaps the above is why the Putin regime has now completely restricted access to social media and to foreign sources of information, and forbidden any reporting on the war: so that evidence cannot be found and exploited by journalists or the opposition?

    As mentioned, bots are used in various ways on the Internet, and they receive a fairly large share of attention across several chapters, one reason being that “bots […] can operate at a scale beyond humans”. In the chapter on Canada, election interference becomes an issue in the elusive question “how can free speech be weighed against foreign interference?” How can national authorities and legislation know a foreign actor isn’t buying bots to spread information in an election, or even know that parties or their affiliates aren’t using bots or cyborg accounts (humans and programs working together) to affect the election? Julia Slupska has written incisively about this, discussing the fine lines between foreign interference in elections, national sovereignty, freedom of speech, and the right to reflect and make choices on our own, and how liberal democracies have attempted to limit digital interference in elections. Bots complicate online speech drastically, because anyone can use bots and cyborg accounts: parties, citizens, companies, organizations. And who is to say who is a citizen, by the way, and who constitutes a foreign interest?
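
    To make the “scale beyond humans” point concrete, here is a minimal sketch of my own (not from the book): a toy amplifier botnet on an imaginary platform, written in Python. Every account, handle and message is hypothetical; real operations run against platform APIs, but the economics are the same: one operator, one loop, hundreds of “voices”.

        import random

        class FakeAccount:
            """A hypothetical automated account on an imaginary platform."""
            def __init__(self, handle: str):
                self.handle = handle
                self.timeline: list[str] = []

            def repost(self, message: str) -> None:
                # An amplifier bot adds nothing of its own; it only repeats.
                self.timeline.append(f"RT {message}")

        # One operator, one script, a thousand "voices".
        botnet = [FakeAccount(f"concerned_citizen_{i}") for i in range(1000)]
        narrative = "#Election The vote is rigged! Share before it's deleted!"

        # Repost from a random 80% of the accounts so the surge looks organic.
        for bot in random.sample(botnet, k=800):
            bot.repost(narrative)

        print(sum(len(bot.timeline) for bot in botnet), "reposts from one run")

    A cyborg account would simply put a human in that loop some of the time, which is exactly what makes the “who is speaking?” question so hard to settle.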

    Taiwan has tried media literacy as a way to counter disinformation, in contrast with, for instance, Canada. In both countries, “positive” bots are deployed to fact-check news (which, by the way, is how some journalists work: deploying bots to check facts before publishing).
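
    As an illustration of that “positive” use (my sketch, not the book’s): a pre-publication bot can flag sentences that mention a known topic but omit its verified value. The claim store and the crude string matching below are hypothetical simplifications; real fact-checking pipelines rely on NLP and human review.

        # Hypothetical store of human-verified claims: topic -> verified value.
        VERIFIED_CLAIMS = {
            "turnout in the 2019 eu election": "just over 50 percent",
            "the euromaidan protests began": "november 2013",
        }

        def flag_sentences(draft: str) -> list[str]:
            """Return sentences that mention a known topic but not its verified value."""
            flagged = []
            for sentence in draft.lower().split("."):
                for topic, value in VERIFIED_CLAIMS.items():
                    if topic in sentence and value not in sentence:
                        flagged.append(sentence.strip())
            return flagged

        draft = ("Turnout in the 2019 EU election was barely 20 percent. "
                 "The Euromaidan protests began November 2013.")
        for sentence in flag_sentences(draft):
            print("CHECK BEFORE PUBLISHING:", sentence)

    Even this toy version shows why such bots assist rather than replace journalists: the hard part is building and maintaining the store of verified claims.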

    Zeynep Tufekci has written about activists, and the same conclusions can be drawn here: human rights activists and the like are targeted and trolled, especially public figures. When the Euromaidan protests broke out in late 2013, activists were instantly barraged, with harassment and threats raining down on them. Fake accounts, bots and foreign interests make it very difficult to know exactly who is behind the screen. Still, do people change their opinions, and if so, when?

    Many of the authors have interviewed people inside various companies (PR firms, software developers, media companies, etc.), which brings interesting insight into how fake accounts are set up, bought and sold, how bot networks work, how they track and generate data on social media users, and how agenda setting and opinion targeting really work.

    Three conclusions in, and a fourth from, the book:

    1. Focus on what is said rather than who is speaking.
    2. Social media must be designed for democracy.
    3. Anyone can use bots.
    4. For computational propaganda to work, it’s necessary to influence opinion leaders (on social media) and the agenda-setting media. Study how Steve Bannon worked before the 2019 European Parliament election, or watch The Brink.

    If you ever find yourself in need of a deep introduction to computational propaganda, this book is a necessity.
