Tag: Surveillance

  • Book review: Stolen Focus

    In 2007 Douglas Coupland released the novel jPod. During a trip to the Czech Republic I read it at the behest of my girlfriend, and I utterly loved it. Five nerds in cubicles (pods), assigned to their places because their surnames begin with the letter J, sit in a basement of Neotronic Arts designing the gore in video/computer games. They’re joined by a sixth member, whose surname also begins with a J and who initially thinks they’re morons. They’re all born at the end of the 1970s and beginning of the 1980s, and their attention span at work is at most 15 minutes long. Morally they differ from their parents; they belong to the ego of the digital age and spend lots of time not working (a Gen X trait, Coupland’s generation I dare say). Having read it thrice, it remains one of my favourite books of all time.

    Fast forward to 2008, the year we travelled to the Czech Republic, and “Twitter makes you feel that the whole world is obsessed with you and your little ego – it loves you, it hates you, it’s talking about you right now”, as Johann Hari writes. As someone who has managed Twitter, Instagram and Facebook accounts for organisations, I can only agree – it’s invasive and takes control of you. I’m happy jPod was released before social media and the new generation of smartphones completely wrecked our attention span and ability to focus.

    “How to slow down in a world that is speeding up?” Hari continues in the book Stolen Focus: Why You Can’t Pay Attention – and How to Think Deeply Again. He outlines twelve problems for our individual and collective attention spans and ability to focus. Not all of them will be covered here, though. For that, you have to read the book.

    On average a person working in an office is undisturbed for approximately 2 minutes and 30 seconds. Undisturbed by others, that is. The average attention span is merely 47 seconds, because people also interrupt themselves. All the time. Meanwhile it takes 23 minutes to return to a state of focus. Meaning we basically never stay focused.

    Hari interviewed lots of people for this book, James Williams at the Oxford Internet Institute being one of them. His words resonate more deeply than most (and there are tons of important words said by intelligent people in the book). We need to take on crucial issues such as climate change, but “when attention breaks down, problem-solving breaks down.” This is a hypothesis Hari clings to, and I concur: tearing attention apart means people can’t concentrate, can’t direct energy at the proper things. As Hari writes, “Depth takes time. And depth takes reflection.” Mind-wandering is a state of mind people should enjoy more, but instead they block it out more or less completely by staring at screens – partly because of the belief that directed thoughts, meaningful thoughts and chores are good, while letting your brain do “nothing” is useless.

    Flooding social media with more information is a very effective way of blocking debates and conversations – it shortens the collective attention span. Add actual noise and sounds, both of which deteriorate our hearing. Somehow we believe in an equilibrium: if you listen to noise and sounds 50 % of the day, you can recuperate if the other 50 % is quiet. But that really depends on the noise (background chatter, for instance, or cars passing by), the sounds (simpler, more occasional sounds) and the silence. Exposure to sounds and noise for hours each day, combined with voices and music, hurts the ears and hearing; eventually it deteriorates from system overload. The same goes for your brain. It cannot evade being disturbed, and it deteriorates slowly, making you more stupid.

    Hari interviews Sune Lehmann, a Danish researcher on time, who exclaims that the new upper class will be the ones with very long attention spans, always able to limit information input and aware of what they are actually doing. The rest of us will simply react to the information fed to us. We read and watch stories about people who can sleep less, eat poor food, and still outperform the average person: the Bond villains and the tech prodigies. They never experience sleep deprivation, never seem to slow down. It’s the opposite of the protagonist of Andy Weir’s Project Hail Mary, who states that humans become stupid when tired. We don’t comprehend that the reason behind “greatness” is mind-wandering, thoughtful discussions, promenades, information intake (and helpers, such as wives, butlers or servants): Abraham Lincoln and Theodore Roosevelt writing their speeches and pondering tough decisions, Harry S. Truman thinking through information and memos before making extremely hard decisions. We desperately need the ability to think in order to grasp and tackle climate change, artificial intelligence and other important issues, though with ruined minds and attention spans we won’t. Another quote from James Williams: “You can only find your starlight and your daylight if you have sustained periods of reflection, mind-wandering and deep thought.”

    Lehmann reminds me of Cal Newport’s Deep Work: the future will belong to the people who can focus, who can work deeply. Earl Miller from the Massachusetts Institute of Technology says we’ve learned to compare ourselves to computer processors, machine parts with the ability to multitask, when in fact we can’t. When we try to do two or three things simultaneously, our brains are reconfiguring relentlessly. While we may believe we’re doing several things at the same time, our brain constantly starts a new chore, gets interrupted by another one, stops and initiates the new chore, then gets interrupted again, stops and tries to reinitiate the first chore – but actually has to restart a little further back than before, because of the interruption. On it goes. In small doses, continuously checking your Facebook feed is worse for your cognition than getting stoned – and who’s allowed to get stoned at work?

    Hari continues to tackle issues such as school systems reining in our children’s abilities to learn and move (more) freely, the diagnosing of children with ADHD, how reading on screens is bleeding into how we read on paper, and the Western world’s issues with nutrition and obesity (your tired body craves sugar and fat, which are omnipresent; we cannot evade them).

    One thing I appreciate about Hari is how he allows different arguments to meet in the book, carried by people who oppose one another, or by Hari himself. And he ends with hope, telling us about the generation his grandmothers belong to and how one of them fought for universal suffrage in Switzerland in the 1970s. Regarding the possibility of challenging these twelve distractions, which are destroying our ability to focus, Hari writes:

    “No source of power, no set of ideas, is so large it can’t be challenged.”

  • Book review: Fancy Bear Goes Phishing

    As soon as I noticed a book published this year with this savvy title (and cover, created by Rodrigo Corral), I knew I had to read it: Fancy Bear Goes Phishing: The Dark History of the Information Age, in Five Extraordinary Hacks. It is authored by Scott J. Shapiro, professor of law and philosophy at Yale Law School. In his youth, Shapiro spent much time with computers, but later chose a career in philosophy and law. When writing about cyberwar, he returned to computers, re-learning programming, computer science and the lingo: evil maid attack, bald butler attack, bluesnarfing, phishing, spear phishing, whaling…

    Attempting to answer the simple questions of why the Internet is insecure, how hackers exploit this insecurity, and how they can be prevented, or at least decreased in numbers, Shapiro takes us on a journey with five stops, from the late 1980s to the hacks of the Democratic National Committee and the Minecraft wars 30 years later.

    One of Shapiro’s main arguments is the distinction between upcode and downcode. Upcode is the human aspect of cybersecurity, such as regulation, law, and organisational norms, whereas downcode is the technical programming and operation of programs, operating systems and the like. His consistent argument is that upcode regulates downcode. Thus, he opposes solutionism, the view that “technology can and will solve our social problems”. I’ve written earlier in 2023 about the tech elite and their engineering-like focus on all issues, as if they could solve everything with math and algorithms, as if reality can be reduced to technicalities. Shapiro continues, with his fantastic sense of humour: “Great news! We can reverse centuries of imperialism, revolution, and poverty with our cell phones.” This connects to Bruce Schneier’s angle on cybersecurity too: focus on the humans primarily.

    Another sentence deeply related to Cathy O’Neil is “Most problems do not have solutions that are reducible to finite procedures.” Solutionism cannot succeed, because it runs up against (Alan) Turing’s physicality principle: changes in the digital realm presuppose changes in the physical realm, which means computation, when all is said and done, is a physical process, and relies on control over the physical world, such as cables, servers, and routers.

    The almost inherent insecurity of the Internet of Things (IoT) is quite obvious – another connection to Schneier, who claims the same thing. IoT devices have very rudimentary operating systems, which are usually really poorly designed. They serve a single purpose, or a few, leaving them open as attack vectors. So your refrigerator might be part of a botnet controlled by some angry teenager playing Minecraft, using that very refrigerator to attack another server running Minecraft.

    Solutionism dominates, sustained by ignorance and incomprehension among programmers and computer scientists, disguised as the common resentment and claim that politics is unfit to keep up with things technical. Shapiro compresses the sentiment of solutionism into one sentence:

    “Politics becomes engineering; moral reasoning becomes software development.”

    Cybersecurity – it’s a human thing

    Shapiro connects law and legal discussions to the cases he tells. What are the legal implications for the hackers? How do the hackers think, and how does the legal system perceive these acts? In cases where the perpetrator is sentenced, how does the legal system reason?

    I appreciate how he considers gaming and programming culture as overtly (white) male, usually rendering women targets of misogynistic hatred, or at least of suspicious activities by men against women (and other gender identities, might I add). This touches briefly on the deeply ingrained meritocratic aspects of programming/hacking culture, as covered by Gabriella Coleman in Coding Freedom: The Ethics and Aesthetics of Hacking.

    Shapiro also provides us with a combination of basic computer science terms and programming concepts, such as the difference between data and code, and how operating systems work. If you don’t understand how very rudimentary programming works, Shapiro will explain it in order to prove his points and ease the complexities of cyberspace somewhat. Knowledge will calm you more than ignorance, he reasons, and I concur.
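    The data/code distinction is worth a tiny illustration of my own (not an example from the book): the very same string can be inert data or executable code, depending on how a program treats it – and that boundary is exactly what many hacks cross.

    ```python
    # The same bytes, treated two ways.
    expression = "2 + 3"

    # Treated as data: the string is just characters to inspect.
    assert len(expression) == 5
    assert expression.startswith("2")

    # Treated as code: eval() hands the string to the interpreter,
    # which parses and executes it.
    assert eval(expression) == 5

    # Classic attacks such as SQL injection boil down to sneaking
    # attacker-controlled *data* into a place where a program will
    # interpret it as *code*.
    ```

    This is also why “never eval untrusted input” is standard advice: the line between data and code is a security boundary.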

    Mainly he presents the various ways hackers exploit human cognition: visuality, irrationality, probability, and time. Hackers are great students of cognition and really social beings, at least virtually, and comprehend how some people will be fooled.

    The sense of humour!

    Shapiro regularly depicts issues and technicalities through diagrams or pictures, and provides proper examples the reader can understand. Regarding the oh-so-common Nigerian prince/general/rich person mail, for instance:

    “This Nigerian Astronaut pushes this internet scam to eleven.”

    Anyone who comprehends this sentence will enjoy reading a serious book on a serious subject.

    It goes up to eleven

    Of all the books on technology I’ve read, this is the best one. Were I to recommend one single book to help people better grasp the cyber realm, Fancy Bear Goes Phishing it is.

  • Book review: Weapons of math destruction

    This was a mandatory book in a course on democracy. I read it approximately three years ago and never reviewed it (this website didn’t exist then), so I thought it was time for a proper review.

    Cathy O’Neil is a computer scientist and mathematician who left academic life for the financial industry in the early 2000s, working with computers for companies making lots of money. There she discovered what is now called Big Data and later became troubled by the purposes and intents of algorithms. After realising the even more troublesome side effects on society, she wrote this book, with the subtitle How Big Data Increases Inequality and Threatens Democracy.

    Through ten chapters, O’Neil takes the reader through what a data model is and how it can affect people in real life: the effects of university ranking models on the possibility of getting an adequate education, evaluations of teachers, online advertising, criminal justice and injustice, and getting insurance, among other things. How come a data model deems a teacher unsuccessful or a job applicant unfit? Is the model correctly constructed, or does it inherit its lack of perspective, and mathematical incoherence, from its creator? Data models with destructive effects on people’s lives are what she calls weapons of math destruction, WMDs.

    By and large, I agree with her and appreciate her arguments and conclusions. Negative feedback loops can imply that black men are more prone to commit crimes because the police have flagged black neighbourhoods as more exposed to petty crimes, sending police patrols to these neighbourhoods rather than to white communities with more hidden crimes not marked on any map. This kind of feedback loop creates or maintains inequalities, which have destructive consequences for society.

    Sometimes, though, she contradicts herself. The extremes in statistical data are more likely to be pointed out and punished, she writes, although she also writes (rightly) that black men become the average in criminal statistics – simply the median and mean, rather than the extreme. In a black community with more black men than white men, black men are the average. In a sense, being an average person, financially for instance, in a big data model can be very punishing, while being an extreme in the form of extremely rich is better.

    On average (huh!), though, this book is still highly relevant, even though we’ve moved into the “age of AI”. AI programmes rely on the same statistical inferences, and suffer the same errors, as the programmes O’Neil discusses. Personally, I think the book is good for social scientists. She presents statistical models used by scientists and businesses, and shows how easily they can turn into stupid models that discriminate against people. It’s nice to get a mathematician’s perspective and logical thinking.

    Conclusion: It still stands. Brief as that.

  • The debate on refugee espionage

    Refugee espionage, according to Swedish law, is when a person unlawfully, secretly and systematically, over time, gathers information about someone else in order to provide a foreign power with this information. It’s been part of Swedish law since the 1940s, and Sweden is one of the few countries to actually prohibit this act.

    There’s research on transnational repression and digital transnational repression, for instance by The Citizen Lab, Marcus Michaelsen, and Siena Anstis and Sophie Barnett. Authoritarian countries spend resources and time to repress diasporas, dissidents and vocal ex-citizens, whether by physically collecting information and threatening them, or by using the Internet.

    How have the Swedish parliament and media debated refugee espionage since 2014, when the law was revised? Does the debate connect refugee espionage to the digital ways of surveilling and repressing people? What does it say about national security and Swedish sovereignty?

    This is my bachelor’s thesis in political science. You can find it here, although it’s only available in Swedish.

  • Book review: How to lose the information war

    I first noticed Nina Jankowicz while reading the report Malign Creativity: How Gender, Sex, and Lies are Weaponized against Women Online. However, I didn’t know Nina specialized in Central and Eastern Europe, that she had been stationed in Ukraine and knows Russian (thus also being able to understand Polish, Czech and Slovak). This book focuses on that same geographical region and, as the title implies, information warfare directed by Russia. She weaves the information wars of the Czech Republic, Estonia, Georgia, Poland and Ukraine together with that of the US, and concentrates on the ways to lose an information war, but also on how to try and tackle it.

    “With the advent of the internet and social media, individual citizens are now ‘news’ outlets themselves.” Countries like Russia use this fact against democracies in order to spread false narratives. In the introduction Nina gives us a thorough dive into the Mueller Report on Russia’s interference prior to and during the presidential election of 2016. It was far more insidious and elaborate than arranging one protest and a counterprotest at the same time and location. The Internet Research Agency (IRA) managed to run popular Facebook pages like Blacktivist and Being Patriotic, as well as arrange seemingly fun and popular protests in Washington D.C.

    Nina takes us to five countries that in different ways have tried, and are trying, to fight Russian information warfare: the Czech Republic, Estonia, Georgia, Poland and Ukraine. Through discussions with government officials, politicians and alternative media, she paints a picture of the different ways these countries try to combat Russian interference and pressure. These could provide the US with lessons on how to lose the information war.

    The lesson of lessons

    When it’s in front of you, it’s completely obvious, and you ask yourself why you never saw it or were never able to say it out loud. Nina does just this. In the chapter on Estonia, she delves into the issue of the Russian minority, how it’s discriminated against and can’t become part of Estonian society. Russia uses this to its advantage, to cast doubt on the Estonian government and majority. How to solve it?

    Whenever we discuss issues related to technology, we tend to see technical solutions. Probably because the tech industry wants it no other way. Probably because we are entranced by technology, living in a technoreligious society, believing in technology as a good force in itself. So, why not simply throw a tech solution at a tech problem? As she writes: “How can any administration that intends to protect free speech censor the authentic opinions of its own citizens?”

    Why not solve this societal issue with a societal solution instead? Simply put: restore trust in government, give the minority a chance to become part of society as a whole. Try not to evoke bad feelings and animosity between people; heal the rifts. Two important pillars of media literacy (which Taiwan has tried) are schools, as in Finland, and public libraries, with the powerful information skills librarians offer to guide citizens through the endless stream of information and literature. Then Russia can no longer use this issue to splinter relations between people and create even bigger rifts. Because one thing Russia does consistently is never to invent new issues, but to use old societal problems to sow discord and splinter society and the nation.

    Downsides

    Four downsides of the book:

    It was published just after Joe Biden was installed as president of the United States, thus missing the Biden administration’s take on cyber warfare, dual-use technologies, spyware and transnational repression, which differs from previous administrations’.

    It was published one year before the Russian war against Ukraine in 2022, which renders some of the politics described obsolete. For instance, Estonia has once more turned more suspicious of the Russian minority, meaning that the chapter on Estonia is not up to date, although it’s still relevant as a historical lesson. The settings for information warfare changed rather drastically in one year.

    Somehow, I really dislike fictional vignettes “capturing” a technology and its implications in the present or future. Carissa Véliz does it in Privacy is Power. Nina does it too, and it misses the mark – partly because it was written before Biden’s presidency, partly because it’s the usual bleak, dry, predictable opening to an issue, set in 2028.

    The chapter about Ukrainian efforts to promote positive images of Ukraine during the Dutch referendum on EU legislation should have been problematized more. Even though the Russians seem to have played a part in negative campaigning, the Ukrainian effort could also be considered foreign interference in an election. Julia Slupska’s piece on election interference is well worth a read.

    Summary

    The book is true to its title. Information warfare pervades the book, and it doesn’t confuse information warfare with espionage or cyber warfare. Terms are very important here, and so are the differences between them. Although Russia as the focal point narrows the scope of information warfare, that’s an advantage here. To write about information warfare in general, or to include Chinese, Iranian, American or other countries’ efforts, would water it down. One can’t cover everything to make a topic or an issue interesting.

    The lessons from the book are important and relevant. Countries must learn from one another, can’t hide from information warfare, and must develop a battery of countermeasures. And those countermeasures are seldom technological, but rather societal, economic and political. Those are the most important things I learned reading this book.

  • Thesis proceeding

    The snow still covers parts of the ground, and I’m writing the introduction, purpose and research questions of my bachelor’s thesis in political science. If all goes according to plan, it’ll be completed and presented to the examiner and supervisor in late May, and in May-June it’ll be publicly discussed and examined.

    My only obstacle at the moment is the lack of research on digital transnational repression in Sweden and Scandinavia. I have ambiguous feelings about this: on the one hand, it feels great to be one of the first students (in Sweden) to write about DTR and refugee espionage; on the other hand, it’s also rather uncomfortable being one of the very first. The phenomenon needs to be introduced in a careful and simple, rather effortless, way, which is much more difficult than it may seem.

    Two of the articles I refer to and base my own thesis on are Drawing a Line: Digital Transnational Repression Against Political Exiles and Host State Sovereignty, and Digital Transnational Repression and Host States’ Obligation to Protect Against Human Rights Abuses. In different ways they highlight the obligations of the host state, and the vulnerability of the host state if it seems to lack the capacity to protect its inhabitants. This angle has been under-researched, as researchers have focused on human rights and freedom. That’s not bad, but the phenomenon, I think, needs to be perceived as more than simply an abuse of human rights. It’ll never be enough to highlight only one dimension of this form of repression.

    The Citizen Lab released their splendid report “Psychological and Emotional Warfare: Digital Transnational Repression in Canada” one year ago, comprising interviews with people residing in Canada who have been targeted with various forms of DTR. If you’re looking for definitions and concepts, and insights into what it’s like living under digital surveillance and threats, the report is really useful.

  • Book review: Click here to kill everybody

    For those who don’t know of Bruce Schneier, he’s one of the world’s most famous and prominent cybersecurity experts. If there’s one person you’d like to guide you and hold your hand in a time of need, Schneier is the one. This book is about the basics of cybersecurity – not the technical aspects, but rather security on the Internet and the Internet+, the interconnected world of the Internet of Things.

    Driverless cars, thermostats, drones, locks on doors, baby dolls and monitors, and pacemakers are interconnected – without any concern for security. Virtually all companies except Apple and Microsoft sell inadequate and incomplete consumer products without testing, whereas in the airplane industry a line of code can cost millions of dollars and pass through very rigorous testing before being applied in reality.

    Click Here to Kill Everybody is a thorough and deep book about how this neglect of cybersecurity has consequences for people, society, companies and governments/authorities – a neglect driven by skewed incentives and meddling from many governments.

    I love the metaphor “The Four Horsemen of the Internet Apocalypse – terrorists, drug dealers, pedophiles, and organized crime” that states and companies use to frighten people. If we standardize encryption of texting, telephone calls and the files on your phone, the dark side will grow even stronger and the good forces will fail to catch and prosecute villains (so the usual comments go). The paradox is that states use front companies to do some of this work as well, like North Korea with organized crime and drugs. Even China (companies connected to the People’s Liberation Army), Russia (the Internet Research Agency, under the now well-known name Yevgeny Prigozhin) and the US (the military-industrial complex and NSA-connected entrepreneurs) all engage companies to do their bidding, no strings attached.

    The situation we’re in: From bad to worse

    An entire chapter is named “Everyone favors insecurity”, a telling title. What it basically comes down to is that companies are unwilling to pay for security – much like eco-friendly products are more expensive, because taking ecological considerations into account costs more than not caring. Apple and Microsoft are two of the very few companies that actually pay attention to security, making sure products are released only when they’re as secure as possible. Most companies follow the former Facebook motto “Move fast and break things” and release rather than delay and miss the launch.

    What people – and companies and authorities – then miss is that our overall security is decreased, put in peril, simply because it’s considered too expensive or too troublesome.

    Security should be the default, like encryption should be the default – not optional or thought of only in hindsight. When products are ready for sale, they should be as complete as possible. The ideal of moving fast and breaking things should be abolished.

    Regulation

    Authorities need more transparency, less secrecy, more oversight and accountability, Schneier argues (and he isn’t alone). The FBI, the NSA and others don’t want encryption and do want backdoors. This is completely contradictory, security-wise. If the population is being preyed upon, if rogue elements can infect and steal from people, then companies and authorities will also be easier targets. The more people who risk being infected and preyed upon, the more who will be in peril. Less security for civil society and people means states are less secure. Yet authorities want to weaken encryption and install backdoors – everyone gains access to do damage, and everyone loses.

    An argument often lost in the debate on regulation is that the losing parties are small companies, without assets or time on their side; regulation favours big corporations, who can adapt much more easily. Big corporations are also more likely to hold the regulators’ attention and be tended to, whereas smaller companies are seldom even seen, mostly overlooked. I think this is one of the most important aspects of the entire book.

    Another issue with regulation is its tendency to focus on particular technologies. Schneier’s suggestion is to “focus on human aspects of the law” instead of technologies, apps, or functions. Also, it’s better to aim for a result and let experts work to achieve that result rather than, again, focus on a specific technology.

    Summary

    The rights of computer scientists / software developers / programmers are still very strong, and they can develop pretty much what they want. We’re too short-sighted and can’t, or refuse to, see possible outcomes and changes from longer perspectives. “We accepted it because what they decided didn’t matter very much. Now it very much matters, and I think this privilege needs to end.” Just because products are digital doesn’t mean they have more right to exist, and living in a society where technology has become a kind of religious belief doesn’t mean technology is impervious to criticism or bad consequences.

    Schneier argues that only states should have the capability to confront cyber attacks, not companies or other organizations. Considering the industry of spyware (or mercenary spyware, as it’s called), I concur, though companies can help as part of cyber defense.

    One of Schneier’s guesses is that the security issues with “Internet+ will creep into their networks” in unexpected ways. Someone brings a device to work, which connects to the Internet and starts to leak data. Suddenly a company or authority realizes it has serious issues with real life implications.

    If you need a basic book about cybersecurity, without any technical details or prerequisites, this is a book for you. It’ll teach you what cybersecurity is about.

  • Time to decide again

    It’s been two years, and finally it’s time to study some more. In roughly one month, we’ll begin writing our bachelor’s theses. Mine will (unless some pivotal change occurs) be about digital transnational repression in Sweden. There isn’t much research on this issue regarding Sweden. There’s scant research internationally too, except from Freedom House, The Citizen Lab and a few researchers specialized in the field, like Marcus Michaelsen. I’m about to dive into their research more thoroughly, choose my material wisely and formulate questions.

  • Book review: Reset

    “We can reclaim the internet for civil society. The principle of restraint should be our guide.”

    The end.

    Basically, I could stop here and write no more. These are the last two sentences of Reset: Reclaiming the Internet for Civil Society by Ronald J. Deibert, and the profound solution to the problems the book is about: the internet, social media, tech companies, surveillance, espionage and cyberwars.

    Deibert founded The Citizen Lab in 2001 as a research group with a mission he had come up with: the dirty backside of the Internet. For once I read a book by an author who doesn’t need to resort to the creepiest descriptions and depictions of what could happen – he is in total command of this subject. You can read it between the lines and see it in the examples given, often from the research of the Citizen Lab and from various other, not the usual, sources. He doesn’t inundate you with details and a massive amount of examples of everything that is inherently wrong with the internet (for that, listen to Grumpy Old Geeks), and it shows in the chapters he’s chosen to focus on. He knows this stuff, without the restless need to show how well he has (begun to) master the subject after a couple of years. The combination of chapters is his strength.

    Causes

    Surveillance capitalism as a concept is the first subject Deibert touches on, writing about the omnipresent tech in our lives, the gadgets we surround ourselves with day and night, for all sorts of reasons. This has been covered by Carissa Veliz, Adam Alter (review coming) and (obviously) Shoshana Zuboff (review coming), to name a few. Deibert writes about various absurd apps, ideas for capturing more personal data and dangerous paths taken by companies, paths that can easily lead to authoritarian perspectives on society and societal change.

    How our addictive machines are used by foreign adversaries to spread propaganda, disinformation and misinformation, to destabilize societies, and to divide and rule is another bleak chapter. Companies, state actors and organisations are playing a very perilous game with democratic states and risking all progress on human rights. Institutions are seemingly falling apart, or at least unable to thwart a slide towards more fragile societies.

    Thirdly, intrusive powers is about how (nation) states use technology to circumvent human rights and deliberation. Abuses of power become harder to track, inhibit and hold anyone accountable for. Technology is more often used to suppress minorities and peoples than to elevate them.

    Aspects of climate and environment are usually completely excluded from books by tech-related authors. The link to the natural world often goes unquestioned. Two of the few exceptions are Kate Crawford and Tung-Hui Hu, both of whom I’ll cover in time.

    I worked in politics for almost seven years, and I concur with Deibert that “material factors play a major role in shaping political outcomes”, that they must be taken into account, and that politics should, at times, adapt to societal changes rather than neglect them. Sometimes you simply follow, not lead. And tech is very much physical, material.

    No other expert I have encountered has been able to combine all these issues and subjects into one coherent text about the state of the internet and democracy. A fellow Canadian, and a political scientist at that, Taylor Owen (yes, listen to his podcast), is the closest one we’ve got.

    Solutions

    Deibert’s a political scientist at heart, although you might think (or deceive yourself) he’s a computer scientist, and it shows when he delves into solutions. He presents the ideas and theory of republicanism, the theory of how “to tie down and restrain the exercise of power not only domestically, but also across borders.” Politics usually moves rather slowly in democratic states, and rightfully so, argue Deibert and the republicans, because decisions should take time, and deliberation is necessary so as not to react emotionally or irrationally to some fancy. Deliberation has become a word with negative connotations; things are supposed to be decided quickly, without thoughtful processes, almost impulsively. Deibert argues that deliberation offers restraint and keeps decisions from being simple (and often stupid) reactions to very contemporary issues. As such, restraint should be exhibited much more, in social media, in politics, in technology-related decisions. Deliberation should be a guideline, not an insult.

    At first my thoughts were similar to those during my reading of Beijmo’s De kan inte stoppa oss – basically, we’re f*cked. After a while I actually felt hope. For once, here’s a person with vast experience and knowledge of how bad things have turned over more than two decades, who can show us adequate, suitable actions on a systemic level. There are no individual recommendations here to “block cookies”, “encrypt all your communications” or “refuse to use social media”. Deibert has spent more time than most humans on these issues, so what he writes is very much what we should do. We should move slower, more deliberately, in order to reclaim the internet for civil society, not for states or companies.

    Conclusion

    If there’s one book to rule them all, this is the one.

  • Dead soldiers in Clearview AI (Revised June 15th)

    Dead soldiers in Clearview AI (Revised June 15th)

    The war between Russia and Ukraine rages on. One method for the Ukrainian resistance to raise awareness of the number of dead Russian (and Ukrainian) soldiers is to use Clearview AI, the facial recognition company, whose service can detect faces and connect them to, for instance, social media profiles. It’s also a method for the Ukrainian Ministry for Digital Transformation and five other Ukrainian agencies to identify dead soldiers scattered on and around battlefields.

    On January 6th 2021, two weeks before the inauguration of Joe Biden as president, we could witness the attack on the Capitol in Washington D.C. Afterwards, authorities could tap into Clearview AI’s services and, quite easily, identify hundreds of participants in these illegal activities. Many of them have been prosecuted and some sentenced to jail. Clearview AI has for years amassed billions of photos from the public Internet, making it extremely good at pinpointing human beings, provided you have a Clearview AI account. The image I have of you will be matched against this gigantic image database, which will probably tell me it is you, even if we haven’t met for years (or ever).
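    The matching described above can be illustrated with a minimal sketch. Clearview AI’s actual model and API are proprietary, so everything here is hypothetical: facial recognition systems of this kind typically run each face photo through a neural network that produces a fixed-length vector (an “embedding”), then compare a probe vector against the database by similarity. The toy 3-dimensional vectors below stand in for real embeddings, which have hundreds of dimensions.

    ```python
    import math

    def cosine_similarity(a, b):
        # Cosine similarity: 1.0 means identical direction, 0 means unrelated.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    def identify(probe, database, threshold=0.9):
        """Return the best-matching identity, or None if no match clears the threshold."""
        best_name, best_score = None, -1.0
        for name, embedding in database.items():
            score = cosine_similarity(probe, embedding)
            if score > best_score:
                best_name, best_score = name, score
        return best_name if best_score >= threshold else None

    # Toy embeddings; in a real system these come from a face-embedding model.
    db = {
        "alice": [0.9, 0.1, 0.1],
        "bob": [0.1, 0.9, 0.2],
    }
    probe = [0.88, 0.12, 0.09]  # a new photo of the same person, embedded by the same model
    print(identify(probe, db))  # prints "alice"
    ```

    The threshold is the crux: set it too low and you get the misidentifications listed further down; set it too high and damaged or distorted faces no longer match at all.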

    The podcast Click Here has a good episode on this and how it’s used in Ukraine. On the one hand employees of the Ministry of Digital Transformation use proper Clearview AI accounts, thus being able to match most images of dead soldiers with real people, even if years have passed, the deceased have no eyes and parts of the faces are distorted. They inform both Ukrainian and Russian relatives and tell them where to retrieve the body.

    More problematic is the fact that groups affiliated with the Ukrainian IT Army appear to use an account too, also informing Russian relatives, though in an (even) more condescending and hostile way. Russian relatives probably feel neither gratitude nor appreciation when suddenly receiving images of dead bodies, especially with gloating or condescending messages.

    Even if I remain a skeptic, there are some reasons for using this kind of technology.

    1. War is gruesome and disgusting. People die and preferably they should be identified. Computers and programs can help here and make this much easier and faster than humans.
    2. War crimes are committed and should be investigated. Technology can help here too.
    3. Russian authorities are not the ones to inform relatives that their sons have died in accidents, wars or “special military operations”. They can lie, and this is where technology can help tell otherwise.
    4. Identification of people is not dependent on favourable relations with another nation’s authorities. Identification can be made without another nation’s consent, because their citizens are in databases elsewhere anyway.

    There are more cons, however, some really strong.

    1. These databases will be targeted by states, state-sponsored organizations, rogue organizations and individuals.
    2. States will strive to acquire similar databases in order to identify anyone anytime anywhere.
    3. To presume that Russian relatives will feel anger at their government and/or gratitude towards Ukrainians for sending images of their dead is a really bad assumption. Rather, it can galvanize public support for Russian authorities.
    4. The hope that grieving mothers’ movements will direct their anger at the Russian regime is likewise misplaced. Why should they, especially if there are anonymous messages from foreigners telling them they are blind to facts and supporting an evil leader?
    5. Disinformation warfare 1 – whom to believe? A random person from another country claiming my relative is dead or the national authorities?
    6. Disinformation warfare 2 – I can claim you are a traitor and use this tool to “prove” it.
    7. Disinformation warfare 3 – can “photoshopped” images be run in Clearview AI?
    8. Disinformation warfare 4 – this kind of technology can trigger an even worse response and method of war, spiralling further down.
    9. Misidentification of individuals happens in every other computer system, so why shouldn’t it happen with Clearview AI?
    10. Images are gathered without consent or notification – and for how long will they be kept?

    Similar systems in use today include the combination of Sky Net and the Integrated Joint Operations Platform in China. They are very creepy and should probably be banned altogether, because the more of this technology there is, the more it will be used. Based on a decision in May, Clearview AI is no longer allowed to sell its database to private businesses in the US, nor to Illinois state agencies (for five years in the latter case). At this point, the database comprises 20 billion facial photos.

    But. After all, it’s rather easy to stay emotionally detached if you’re not in Ukraine, living your life, albeit with inflation and a shaky economy. The war is far away, and it’s easy to say this weaponized use of images is wrong. In a different situation, though, with war, death, fear and suffering around me, I’d probably be doing it myself.