Tag: Personal data

  • Book review: Stolen Focus

    Book review: Stolen Focus

In 2007 Douglas Coupland released the novel jPod. During a trip to the Czech Republic I read it at the behest of my girlfriend, and I utterly loved it. Five nerds in cubicles (pods) in a basement of Neotronic Arts, assigned to their places because their surnames begin with J, are designing the gore in video/computer games. They’re joined by a sixth member, whose surname also begins with a J and who initially thinks they’re morons. They’re all born at the end of the 1970’s and beginning of the 1980’s, and their attention span at work is at most 15 minutes long. Morally they differ from their parents; they belong to the ego of the digital age and spend lots of time not working (a Gen X trait, Coupland’s generation I dare say). Having read it thrice, it remains one of my favourite books of all time.

Fast forward to 2008, the year we travelled to the Czech Republic, and “Twitter makes you feel that the whole world is obsessed with you and your little ego – it loves you, it hates you, it’s talking about you right now”, as Johann Hari writes. As someone who’s managed Twitter, Instagram and Facebook accounts for organisations, I can only agree – it’s invasive and takes control of you. I’m happy jPod was released before social media and the new generation of smartphones completely wrecked our attention spans and ability to focus.

“How to slow down in a world that is speeding up?” Hari continues in the book Stolen Focus: Why You Can’t Pay Attention – and How to Think Deeply Again. He outlines twelve problems for our individual and collective attention spans, and for our ability to focus. Not all of them will be covered here, though. For that, you have to read the book.

    On average a person working in an office is undisturbed for approximately 2 minutes and 30 seconds. Undisturbed by others, that is. The average attention span is merely 47 seconds, because people also interrupt themselves. All the time. Meanwhile it takes 23 minutes to return to a state of focus. Meaning we basically never stay focused.

Hari interviewed lots of people for this book, James Williams at the Oxford Internet Institute being one of them. His words resonate more deeply than most (and there are tons of important words said by intelligent people in the book). We need to take on crucial issues such as climate change, but “when attention breaks down, problem-solving breaks down.” This is a hypothesis Hari clings to, and I concur: tearing attention apart means people can’t concentrate, can’t direct energy at the proper things. As Hari writes, “Depth takes time. And depth takes reflection.” Mind-wandering is a state of mind people should enjoy more, but instead they block it out more or less completely by staring at screens – partly due to the belief that directed thoughts, meaningful thoughts and chores are good, while letting your brain do “nothing” is useless.

Flooding social media with more information is a very good way of blocking debates and conversations – it shortens the collective attention span. Add actual noise and sounds, both of which deteriorate our hearing. Somehow we believe it’s an equilibrium: you listen to noise and sounds 50 % of the day, and you can recuperate if the other 50 % is quiet. But that really depends on the noise (background chatter, for instance, or cars passing by), the sounds (simple, more occasional sounds) and the silence. Being exposed to sounds and noise for hours each day, combined with voices and music, hurts the ears and hearing. Eventually it deteriorates from system overload. The same goes for your brain. It cannot evade being disturbed, and it deteriorates slowly, making you more stupid.

Hari interviews Sune Lehmann, a Danish researcher on time, who exclaims that the new upper class will be the ones with very long attention spans, always able to limit information input and aware of what they are actually doing. The rest of us will simply react to the information fed to us. We read and watch stories about people who can sleep less, eat poorly, and still outperform the average person: the Bond villains and the tech prodigies. They never experience sleep deprivation, never seem to slow down. It’s the opposite of the protagonist of Andy Weir’s Project Hail Mary, who states that humans become stupid when tired. We don’t comprehend that the reason behind “greatness” is mind-wandering, thoughtful discussions, promenades, information intake (and helpers, such as wives, butlers or servants): Abraham Lincoln and Theodore Roosevelt writing their speeches and pondering tough decisions, Harry S. Truman thinking through information and memos before making extremely hard decisions. We desperately need the ability to think in order to grasp and tackle climate change, artificial intelligence and other important issues, though with ruined minds and attention spans we won’t. Another quote from James Williams: “You can only find your starlight and your daylight if you have sustained periods of reflection, mind-wandering and deep thought.”

Lehmann reminds me of Cal Newport’s Deep Work: the future will belong to the people who can focus, who can work deeply. Earl Miller at the Massachusetts Institute of Technology says we’ve learned to compare ourselves to computer processors, machine parts with the ability to multitask, when in fact we can’t. When we try to do two or three things simultaneously, our brains are reconfiguring relentlessly. While we may believe we’re doing several things at the same time, our brains constantly start a new chore, get interrupted by another one, stop and initiate the new chore, then get interrupted again, stop and try to reinitiate the first chore – but actually have to restart a little further back than before, because of the interruption. On it goes. In some ways it’s worse to check your Facebook feed continuously than to get stoned – and who’s allowed to get stoned at work?

Hari goes on to tackle issues such as school systems reining in our children’s abilities to learn and move (more) freely, the diagnosing of children with ADHD, how reading on screens is bleeding into how we read on paper, and the Western world’s issues with nutrition and obesity (your tired body craves sugar and fat, which are omnipresent; we cannot evade them).

One thing I appreciate about Hari is how he allows different arguments to meet in the book, carried by people who oppose one another, or by Hari himself. And he ends with hope, telling us about the generation his grandmothers belong to and how one of them fought for universal suffrage in Switzerland in the 1970’s. Regarding the possibility of challenging these twelve distractions that are destroying our ability to focus, Hari writes:

    “No source of power, no set of ideas, is so large it can’t be challenged.”

  • Book review: Fancy Bear Goes Phishing

    Book review: Fancy Bear Goes Phishing

As soon as I noticed a book published this year with this savvy title (and cover, created by Rodrigo Corral), I knew I had to read it: Fancy Bear Goes Phishing: The Dark History of the Information Age, in Five Extraordinary Hacks, authored by Scott J. Shapiro, professor of law and philosophy at Yale Law School. In his youth, Shapiro spent much time with computers, but later chose a career in philosophy and law. When writing about cyberwar, he returned to computers, re-learning programming, computer science and the lingo: evil maid attack, bald butler attack, bluesnarfing, phishing, spear phishing, whaling…

Attempting to answer the simple questions of why the Internet is insecure, how hackers exploit that insecurity, and how they can be prevented, or at least reduced in number, Shapiro takes us on a journey with five stops, from the late 1980’s to the hacks of the Democratic National Committee and the Minecraft wars 30 years later.

One of Shapiro’s main arguments is the distinction between upcode and downcode. Upcode is the human aspect of cybersecurity, such as regulation, law, and organizational norms, whereas downcode is the technical programming and operation of programs, operating systems and the like. His consistent argument is that upcode regulates downcode. Thus, he opposes solutionism, the view that “technology can and will solve our social problems”. I’ve written about the tech elite earlier in 2023: their engineering-like focus on all issues, their belief that everything can be solved with math and algorithms, as if reality can be reduced to technicalities. Shapiro continues, with his fantastic sense of humour: “Great news! We can reverse centuries of imperialism, revolution, and poverty with our cell phones.” This connects to Bruce Schneier’s angle on cybersecurity too: focus on the humans primarily.

Another sentence deeply related to Cathy O’Neil is “Most problems do not have solutions that are reducible to finite procedures.” Solutionism cannot succeed, because it runs up against (Alan) Turing’s physicality principle: changes in the digital realm presuppose changes in the physical realm, which means computation, when all is said and done, is a physical process, and relies on control over the physical world, such as cables, servers, and routers.

The almost inherent insecurity of the Internet of Things (IoT) is quite obvious – another connection to Schneier, who claims the same thing. IoT devices have very rudimentary operating systems, which are usually really poorly designed. They have a singular purpose, or a few, leaving them open as attack vectors. So your refrigerator might be part of a botnet controlled by some angry teenager playing Minecraft, using that very refrigerator to attack another server running Minecraft.

Solutionism dominates, marked by ignorance and incomprehension among programmers and computer scientists, disguised as the common resentment and claim that politics is unfit to keep up with things technical. Shapiro compresses the sentiment of solutionism into one sentence:

    “Politics becomes engineering; moral reasoning becomes software development.”

    Cybersecurity – it’s a human thing

Shapiro connects law and legal discussions to the cases he tells. What are the judicial implications for the hackers, how do the hackers think, and how does the legal system perceive these acts? In cases where the perpetrator is sentenced, how does the legal system reason?

I appreciate how he considers gaming and programming culture to be overtly (white) male, usually rendering women targets of misogynist hatred, or at least of suspicious activities by men against women (and other gender identities, might I add). This touches briefly on the deeply ingrained meritocratic aspects of programming/hacking culture, as covered by Gabriella Coleman in Coding Freedom: The Ethics and Aesthetics of Hacking.

Shapiro also provides us with basic computer science terms and programming concepts, such as the difference between data and code, and how operating systems work. If you don’t understand how even rudimentary programming works, Shapiro will explain it in order to prove his points and ease the complexities of cyberspace somewhat. Knowledge will calm you more than ignorance, he reasons, and I concur.

Mainly he presents the various ways hackers exploit human cognition: visuality, irrationality, probability, and time. Hackers are great at reading cognition and are really social beings, at least virtually, and they comprehend how some people will be fooled.

    The sense of humour!

Shapiro regularly depicts issues and technicalities through diagrams or pictures, and provides proper examples the reader can understand – such as this one, regarding the oh-so-common Nigerian prince/general/rich person mail:

    “This Nigerian Astronaut pushes this internet scam to eleven.”

Anyone who comprehends this sentence will enjoy reading a serious book on a serious subject.

    It goes up to eleven

    Of all the books on technology I’ve read, this is the best one. Were I to give people a recommendation on one single book they could read to better grasp the cyber realm, Fancy Bear Goes Phishing it is.

  • Book review: Weapons of math destruction

    Book review: Weapons of math destruction

This book was mandatory during a course on democracy. I actually read it approximately three years ago and thus never reviewed it (this website didn’t exist then), so I thought it was time for a proper review.

Cathy O’Neil is a computer scientist and mathematician who left academic life for the financial industry in the early 2000’s, working with computers for companies making lots of money. There she discovered what is now called Big Data and later became troubled by the purposes and intents of algorithms. After realising the even more troublesome side effects on society, she wrote this book, with the subtitle How Big Data Increases Inequality and Threatens Democracy.

Through ten chapters, O’Neil takes the reader through what a data model is and how it can affect people in real life: the effects of university ranking models on the possibility of getting an adequate education, evaluations of teachers, online advertising, criminal justice (and injustice) and getting insurance, among other things. How come a data model deems a teacher unsuccessful or a job applicant unfit? Is the model correctly constructed, or does it inherit its lack of perspective, and mathematical incoherence, from its creator? Data models with destructive effects on people’s lives are what she calls weapons of math destruction, WMDs.

By and large, I agree with her and appreciate her arguments and conclusions. Negative feedback loops can make it seem that black men are more prone to commit crimes, because the police have marked black neighbourhoods as more exposed to petty crimes, sending police patrols to these neighbourhoods rather than to white communities with more hidden crimes not marked on a map. This kind of feedback loop creates or maintains inequalities, which has destructive consequences for society.

Sometimes, though, she contradicts herself. The extremes in statistical data are more likely to be pointed out and punished, she writes, although she also writes (rightly) that black men become the average in criminal statistics, simply being the median and mean rather than the extreme. In a black community with more black men than white men, black men are the average. In a sense, being an average person, financially for instance, in a big data model can be very punishing, while being an extreme in the form of the extremely rich is better.

On average (huh!), though, this book is still highly relevant, even though we’ve moved into the “age of AI”. AI programmes rely on the same kinds of errors and statistical inferences as the programmes O’Neil discusses. Personally, I think the book is good for social scientists. She presents statistical models used by scientists and businesses, and shows how easily they can turn into stupid models that discriminate against people. It’s nice to get a mathematician’s perspective and logical thinking.

    Conclusion: It still stands. Brief as that.

  • Book review: Click here to kill everybody

    Book review: Click here to kill everybody

    For those who don’t know of Bruce Schneier, he’s one of the world’s most famous and prominent cybersecurity experts. If there’s one person you’d like to guide you and hold your hand while in need, Schneier is the one. This book is about basics of cybersecurity, not the technical aspects, but rather about security on the Internet and the Internet+, the interconnected world of the Internet of things.

Driverless cars, thermostats, drones, locks on doors, baby dolls and monitors, and pacemakers are interconnected – without any concern for security. Virtually all companies except Apple and Microsoft sell inadequate and incomplete consumer products without testing, whereas in the airplane industry a line of code can cost millions of dollars and passes through very rigorous testing before being applied in reality.

Click Here to Kill Everybody is a thorough and deep book about how this neglect of cybersecurity has consequences for people, society, companies and governments/authorities. The neglect stems from skewed incentives and meddling from many governments.

I love the metaphor “The Four Horsemen of the Internet Apocalypse – terrorists, drug dealers, pedophiles, and organized crime” that states and companies use to frighten people. If we standardize encryption in texting, telephone calls and the files on your phone, the dark side will become even stronger and the good forces will fail at catching and prosecuting villains (or so the usual argument goes). The paradox is that states use front companies to do some of this work as well, like North Korea with organized crime and drugs. Even China (companies connected to the People’s Liberation Army), Russia (the Internet Research Agency, under the now well-known Yevgeny Prigozhin) and the US (the military-industrial complex and NSA-connected entrepreneurs) engage companies to do their bidding, no strings attached.

    The situation we’re in: From bad to worse

An entire chapter is named “Everyone favors insecurity”, a telling title. What it basically comes down to is that companies are unwilling to pay for security – very much like eco-friendly products being more expensive, because taking ecological considerations into account costs more than not caring. Apple and Microsoft are two of the very few companies that actually pay attention to security, making sure that products are released when they’re as secure as possible. Most companies follow the former Facebook motto “Move fast and break things” and release rather than delay and miss the launch.

    What people, and companies and authorities, then miss is the fact that our overall security is decreased, in peril, simply because it’s considered too expensive or too troublesome.

Security should be the default, like encryption should be the default – not optional, nor an afterthought. When products are ready for sale, they should be as complete as possible. The ideal of move fast and break things should be abolished.

    Regulation

Authorities need more transparency, less secrecy, more oversight and accountability, Schneier argues (and he isn’t alone). The FBI, NSA and others don’t want encryption and want backdoors. This is completely contradictory, security-wise. If the population is being preyed upon, if rogue elements can infect and steal from people, then companies and authorities will also be easier targets. The more people who risk being infected and preyed upon, the more who will be in peril. Less security for civil society and people means states are less secure too. Yet authorities want to weaken encryption and install backdoors – everyone gains access to do damage, everyone loses.

An argument often lost in the debate on regulation is that the losing parties are small companies, without assets or time on their side; regulation favours big corporations, who can adapt much more easily. Big corporations are also the ones within the regulators’ attention span and get tended to, whereas smaller companies are seldom even seen, mostly overlooked. I think this is one of the most important aspects of the entire book.

Another issue with regulation is its tendency to focus on particular technologies. Schneier’s suggestion is to “focus on human aspects of the law” instead of technologies, apps, or functions. Also, it’s better to aim for a result and let experts work to achieve that result than, again, to focus on a specific technology.

    Summary

The rights of computer scientists / software developers / programmers are still very strong, and they can develop pretty much whatever they want. We’re too short-sighted and can’t, or refuse to, see possible outcomes and changes from longer perspectives. “We accepted it because what they decided didn’t matter very much. Now it very much matters, and I think this privilege needs to end.” Just because products are digital doesn’t mean they have more right to exist, and living in a society where technology has become some kind of religious belief doesn’t mean technology is impervious to criticism or bad outcomes.

Schneier argues that only states should have the capability to confront cyber attacks, not companies or other organizations. Considering the industry of spyware (or mercenary spyware, as it’s called), I concur, though companies can help as part of cyber defense.

One of Schneier’s guesses is that the security issues of the “Internet+ will creep into their networks” in unexpected ways. Someone brings a device to work, it connects to the Internet and starts to leak data. Suddenly a company or authority realizes it has serious issues with real-life implications.

    If you need a basic book about cybersecurity, without any technical details or prerequisites, this is a book for you. It’ll teach you what cybersecurity is about.

  • Book review: Reset

    Book review: Reset

    “We can reclaim the internet for civil society. The principle of restraint should be our guide.”

    The end.

Basically, I could stop here and write no more. These are the last two sentences of Reset: Reclaiming the Internet for Civil Society by Ronald J. Deibert, and the profound solution to the problems the book is about: the internet, social media, tech companies, surveillance, espionage, cyberwars.

Deibert founded The Citizen Lab in 2001 as a research group with a mission he had come up with himself: investigating the dirty backside of the Internet. For once I read a book by an author who doesn’t need to resort to the creepiest descriptions and depictions of what could happen – he has total command of the subject. You can read it between the lines, and see it in the examples given, often from the research of the Citizen Lab and from various other, less usual sources. He doesn’t inundate you with details and a massive amount of examples of everything that is inherently wrong with the internet (for that, listen to Grumpy Old Geeks), and it shows in the chapters he’s chosen to focus on. He knows this stuff without the restless need to show how well he has (begun to) master the subject after a couple of years. The combination of chapters is his strength.

    Causes

Surveillance capitalism as a concept is the first subject Deibert touches on, writing about the omnipresent tech in our lives, the gadgets we surround ourselves with day and night, for all sorts of reasons. This has been covered by Carissa Véliz, Adam Alter (review coming) and (obviously) Shoshana Zuboff (review coming), to name a few. Deibert writes about absurd apps, ideas to capture more personal data, and dangerous paths taken by companies – paths that can easily lead to authoritarian perspectives on society and societal change.

How our addictive machines are used to spread propaganda, disinformation and misinformation, to destabilize societies, and to divide and rule among foreign adversaries is another bleak chapter. Companies, state actors and organisations are playing a very perilous game with democratic states, risking all progress on human rights. Institutions are seemingly falling apart, or at least unable to thwart a slide towards more fragile societies.

Thirdly, intrusive powers: how technology is used by (nation) states to circumvent human rights and deliberation. Abuses of power become harder to track, inhibit and hold accountable. Technology is more often used to suppress minorities and people than to elevate them.

Aspects of climate and environment are usually completely excluded from books by tech-related authors. The link to the natural world often goes unquestioned. Two of the few exceptions are Kate Crawford and Tung-Hui Hu, both of whom I’ll cover in time.

I worked in politics for almost seven years, and I concur with Deibert that “material factors play a major role in shaping political outcomes” and must be taken into account; politics should, at times, adapt to societal changes rather than neglect them. Sometimes you simply follow, not lead. And tech is very much physical, material.

No other expert I have encountered has been able to combine all these issues and subjects into one coherent text about the state of the internet and democracy. A fellow Canadian, and a political scientist at that, Taylor Owen (yes, listen to his podcast) is the closest one we’ve got.

    Solutions

Deibert’s a political scientist at heart, although you might think (or deceive yourself) that he’s a computer scientist, and it shows when he delves into solutions. He presents the ideas and theory of republicanism, the theory of how “to tie down and restrain the exercise of power not only domestically, but also across borders.” Politics usually moves rather slowly in democratic states, and rightfully so, argue Deibert and the republicans, because decisions should take time, and deliberation is necessary so as not to react emotionally or irrationally to some fancy. Deliberation has become a word with negative connotations – things should be decided quickly, without thoughtful processes, almost impulsively. Deibert argues that deliberation offers restraint; it prevents decisions from being simple (and often stupid) reactions to very contemporary issues. As such, restraint should be exhibited much more: in social media, in politics, in technology-related decisions. Deliberation should be a guideline, not an insult.

At first my thoughts were similar to those during my reading of Beijmo’s De kan inte stoppa oss – basically, we’re f*cked. After a while I actually felt hope. For once, here’s a person with vast experience and knowledge of how bad things have turned over more than two decades, who can show us real, adequate and suitable actions on a systemic level. There are no individual recommendations here to “block cookies”, “encrypt all your communications” or “refuse to use social media”. Deibert has spent more time than most humans on these issues, so what he writes is very much what we should do. We should move slower, more deliberately, in order to reclaim the internet for civil society, not for states or companies.

    Conclusion

    If there’s one book to rule them all, this is the one.

  • Two sides of Cambridge Analytica

    Two sides of Cambridge Analytica

I remember sitting on the bus to Arlanda Airport, frantically reading interviews with someone named Christopher Wylie in The Guardian, the breaking news on every other news channel I could possibly find on the 18th of March 2018: Cambridge Analytica and its role in manipulating democratic elections.

Mindf*ck by Christopher Wylie

Chris Wylie is a self-taught computer guy with a knack for analyzing data, especially electoral data from Canada, England and the US. He worked for the Liberals in Canada, moved to England and started working for the Liberal Democrats there. Later he started working for a small data company named SCL Group (Cambridge Analytica was part of Strategic Communications Laboratories Incorporated, shortened SCL, later the SCL Group) and then for Cambridge Analytica (CA).

CA worked with military clients, and one direction was to influence the minds and behaviour of people, especially “the enemy”. Wylie introduces the reader to the history of psychological operations, psyops. For this they needed data, and data to analyze, so they turned mainly to social media. CA began operating for parties in elections in various countries, often poor ones with weak democratic institutions.

Wylie tells the story of how Dr. Kogan came up with the app (This Is Your Digital Life) that harvested data points and personal data on approximately 87 million Facebook users (Kaiser tells this too); how he met Steve Bannon; and how Cambridge Analytica came to be baptised.

One of my favourite parts, and the one I remember the most, is how he travelled for CA to interview lots of people. Countless field studies became a backbone of the Trump campaign, alongside all the digital data points collected through (primarily) Facebook. I think this side, and importance, of the story is rather underappreciated: how people like Wylie sat with hundreds or thousands of people to interview them, to better understand why they voted for conservative ideas, how to trigger people online, how to microtarget individuals or small groups. Wylie and his colleagues understood that talking to real persons in real life is where you really understand people.

Crucial to the story is that Wylie quit CA in 2014, two years before the Brexit referendum and the American presidential election of 2016.

    Targeted by Brittany Kaiser

Before buying Wylie’s book I noticed that another person had defected from Cambridge Analytica, or actually from SCL: Brittany Kaiser, to most people an unheard-of name. After some time I watched the documentary The Great Hack on Netflix, and Brittany Kaiser stepped into my mind for the first time, outside the book reviews.

She was devoted to human rights and tireless work for NGOs internationally. She also worked on Barack Obama’s first presidential campaign. According to her, she needed money for her parents, and got hired by SCL. She became a travelling salesperson, working somewhat closely (yet loosely) with Alexander Nix, the head of both companies, for years. She was very much involved in American politics, first with the campaign of Ted Cruz and later with Donald Trump’s campaign.

AggregateIQ (AIQ), also called SCL Canada, was one of the companies belonging to the SCL Group (Wylie also writes about it). It became involved in the Brexit referendum, doing business for leave campaigns, using lots of personal data on social media and involving money the campaigns were not supposed to have.

The most fascinating thing, and the one that really stuck with me, is Siphon and the details Kaiser provides on microtargeting people. Siphon was a dashboard with which “the campaign could keep track of ad performance in real time”. Its users could adjust campaigns after drilling into the details of every single ad (and there were many thousands) they ran. Kaiser presents the costs of showing ads to Hispanics deemed persuadable, with political interests in “jobs, taxes, and education”, or to white women in Georgia, deemed persuadable, with interests in “debt, wages, education, and taxes”. The entire US was turned into a video game, the states representing theatres to be won.

    All in all: Wylie vs Kaiser

Both Wylie and Kaiser perceive Cambridge Analytica’s work as dangerous. They give plenty of examples of how CA tried to manipulate and influence voters and suppress people from voting. One issue is that they exaggerate their own and CA’s clout. They were definitely meddling in the contested elections in the US and UK, but there were so many other actors involved, and Bannon and the Mercers are not flawless superminds working in the shadows, able to influence and manipulate everyone. Things are usually complex. I think the main reason the story of Cambridge Analytica became so big is that it showed how social media, personal data and the dirty tactics of today work.

    There are real differences between Wylie and Kaiser, some that I need to address.

Wylie’s contempt for Alexander Nix is unmistakable, whereas Kaiser is more forgiving and can see beyond Nix’s influence and work to someone charming, someone human. Wylie really has/had some difficulties getting along with people and isn’t afraid of mentioning it.

Where Kaiser is skeptical and suspicious of The Guardian’s reporting on CA, and of Carole Cadwalladr in particular, Wylie is completely dependent on this newspaper – and on Cadwalladr in particular.

    Mind also that Wylie claims Kaiser isn’t a whistleblower, just an opportunist saving herself before the boat sank. Kaiser, on the other hand, claims Wylie was a simple low-level worker she had never really heard of, who exaggerated his own importance and actually left before the crucial years of 2015–2017. One can see the similarity to Edward Snowden, who proclaimed he had more power and insight than he actually did: Wylie fills his story with conversations with important persons (Steve Bannon and Rebekah Mercer, for instance), while Kaiser doesn’t seem to understand how important she actually was to CA. She was there with Ted Cruz and Kellyanne Conway during his campaign; she was present with Steve Bannon, Conway and Donald Trump on election night in 2016. She was part of the team.

    All in all, I think her book is slightly more sincere. She acknowledges faults, mistakes and blind spots, things she refused to see during the years 2014–2017. She didn’t seem to ask the necessary questions, albeit, in her defense, she wasn’t immersed in the technical issues or the field research the way Wylie actually was. He admits the jolt of interest and excitement of interviewing a New Age woman who is into Donald Trump, sitting in her house asking questions. Meanwhile Kaiser is constantly on airplanes brokering deals. Should she have suspected something? Shouldn’t she? Should he? Shouldn’t he? Does anyone acknowledge their own side as “evil”, “bad” or “wrong”? Most people on this planet presume they’re on the right side, the good side. If I tell you your boss might be using surveillance programs on your work computer, do you examine whether I’m right, or do you presume I’m wrong? Are you too lazy to check, do you think me a liar or a conspiracist for asserting such a thing, or are you more comfortable remaining in the unknown unknown?

    Kaiser and Wylie were both useful fools, running fool’s errands for years, for rich people who understood how social media, the press, elections (for instance, how few votes in specific districts are needed to win an election there) and people work. People like Bannon, John Bolton and the Mercers pull strings in order to turn politics in their direction. They use a variety of companies to gather personal data, to sway people’s minds, to insert news into social media and the press, to manipulate tiny details in order to turn the whole into something different. Insidious and ingenious.

    Still, after all, how many people actually question their jobs, their vocations, their circumstances as they happen, and not simply in hindsight? These two did question their jobs before Cambridge Analytica really came into the headlines, even if their views and opinions differ. Their stories are well worth reading, particularly because they differ.

  • Book review: Computational propaganda

    Book review: Computational propaganda

    The Oxford Internet Institute is a go-to zone whenever I need some knowledge about cyberspace, cybersecurity, Internet research or many other topics. It’s a fascinating interdisciplinary institute, blending data science with social science (sociology or political science, for instance) into what is called social data science, looking at algorithms, artificial intelligence and large-scale disinformation campaigns. They have a score of PhD students and scientists doing very interesting and exciting research. Occasionally, the scientists release books, such as this one: Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media, edited by Samuel C. Woolley and Philip N. Howard. The book comprises case studies of digital disinformation efforts (a main focus is certain types of bots) in nine countries, ranging from Canada and Poland to Russia and, naturally, Ukraine.

    Ukraine was hit several times on a large scale, both by cyberattacks and by computational propaganda. The Russians used bots of various kinds: impact and service bots, amplifiers, complainers and trackers. Research found that civil society drove the response, which was decentralized, in contrast to the centralized focus and power of the Russian attackers. Computational propaganda was used to manipulate opinion, sow discord, discredit various Ukrainian actors and support others.

    Russia is surprisingly interesting. It was, until about two weeks ago, a country where VKontakte and Yandex competed with Facebook and Google and were the bigger actors, in a market not skewed toward the Western giants. But most fascinating is that the blogosphere, and parts of social media, rely on good reporting, which results in well-built fake news. In the blogosphere, posts needed to have well-founded arguments and evidence “right away, preferably with detailed, often highly technical, reports on the matter”. If that failed, hackers were brought in to expose personal mails and grievances which could be exploited against journalists or the political opposition. It meant that evidence was very important. Since 2011 the situation has deteriorated, though. Perhaps the above is why the Putin regime has now completely limited access to social media and to foreign sources of information, and forbidden any reporting on the war: because evidence is not to be found, not to be exploited by journalists or the opposition?

    As mentioned, bots are used in various ways on the Internet, and they are a fairly large focus in several chapters, one reason being that “bots […] can operate at a scale beyond humans”. In the chapter on Canada, election interference becomes an issue in the elusive question: how can free speech be weighed against foreign interference? How can national authorities and legislation know a foreign actor isn’t buying bots to spread information in an election, or even know parties or affiliates aren’t using bots or cyborg accounts (humans and programs together) to affect the election? Julia Slupska writes purposefully about this, discussing the fine lines between foreign interference in elections, national sovereignty, freedom of speech and the right to reflect and make choices on our own, and how liberal democracies have attempted to limit digital interference with elections. Bots complicate online speech drastically, because anyone can use bots and cyborg accounts: parties, citizens, companies, organizations. And who is to say who is a citizen, by the way, and who constitutes a foreign interest?

    Taiwan has tried media literacy as a way to counter disinformation, in contrast to, for instance, Canada. In both countries “positive” bots are deployed to fact-check news (which, by the way, is how some journalists work, deploying bots to check facts before publishing).

    Zeynep Tufekci has written about activists, and the same conclusions can be drawn here: human rights activists and the like are targeted and trolled, especially public ones. When the Euromaidan protests broke out, activists were instantly barraged, with harassment and threats raining down on them. Fake accounts, bots and foreign interests make it very difficult to know who exactly is behind the wall. Still, do people change their opinions, and if so, when?

    Many of the authors have interviewed people inside various companies (PR firms, software developers, media companies etc.), which gives an interesting insight into how fake accounts are set up, bought and sold, how bot networks work, how they track and generate data on social media users, and how agenda setting and opinion targeting really work.

    Three conclusions in, and a fourth from, the book:

    1. Focus on what is said rather than who is speaking.
    2. Social media must be designed for democracy.
    3. Anyone can use bots.
    4. For computational propaganda to work, it’s necessary to influence opinion leaders (on social media) and the agenda-setting media. Study how Steve Bannon worked before the European Parliament election in 2019, or watch The Brink.

    If ever you find yourself in need of a deep introduction to computational propaganda, this book is a necessity.

    Night and blur – The Bilinda Butchers

  • Book review: The perfect police state

    Book review: The perfect police state

    This could well be a follow-up to Beijmo’s De kan inte stoppa oss. Instead of Syria as the main stage, the focus here is China. As Europe was an outlier in Beijmo’s book, Turkey and the US are the outliers here.

    Writer Geoffrey Cain presents himself early on as the journalist he is, travelling Xinjiang in western China, though the main character in the narrative is Maysem, an Uighur who has escaped Xinjiang. Cain introduces “the Situation”: the extreme oppression of Uighurs and other minorities in Xinjiang, and the equally extreme surveillance created there by the Chinese Communist Party. Its war on the three evils (terrorism, separatism and extremism) has created an omnipresent surveillance like no other, a dream for predecessors like the Gestapo, the Stasi and the KGB, much of it in the form of Sky Net (yes, it sounds like Terminator) and the Integrated Joint Operations Platform. The purposes are reeducation (brainwashing), genocide (e.g. by sterilization) and dismantling the entire culture by sowing distrust between every person.

    Cain does a much better job than Beijmo at explaining his sources, his knowledge of things usually hidden, why he cross-checked interviewees, reasons for altering names and how he ended up covering this dire subject.

    Excerpt from pages 6-7 in the book, where Cain watches heavily equipped counterterrorism officers in the city of Kashgar:

    “I casually snapped a photo of the scene with my cell phone, and started to walk away. One of the police officers was wearing sunglasses with a built-in camera linked to China’s Sky Net surveillance database; the camera was connected by a wire to a minicomputer in his pocket. He turned left and glanced at me. If I were a local resident, he could probably see my name and national ID number on his lenses within seconds. Before I knew it, I was surrounded by police. I didn’t know where they came from, or how long they had been watching me. […] It’s likely that I’d been watched from the moment I arrived. Fellow journalists had warned me that my hotel room would be bugged and any laptops or smartphones I left in my room would be scanned. With 170 million cameras nationwide, some able to identify anyone from up to nine miles away, and government devices called Wi-Fi sniffers gathering data on all smartphones and computers within their range, the state probably knew a great deal about me the moment I stepped off the plane.”

    This occurred in 2017, just a couple of years after Beijmo’s story. Much has happened in technology, albeit in another country. It reminds me of the planes with cameras circling Baltimore, recording every vehicle and person in the city, as told by Bloomberg in 2016.

    By retelling Maysem’s story, and the stories of multiple other persons, we’re given a glimpse of the horrible oppression of the ethnic minorities in Xinjiang: reeducation centers, concentration camps, forced labor for Chinese and international corporations, the propaganda emanating from the Communist Party, the male watchers allowed to sleep in female Uighurs’ homes and beds. Technologically, it’s on a totalitarian scale the Nazis could only have dreamed of. The system is despicable and omnipresent, and the retelling is haunting, even if it’s not as blunt and violent as in Syria.

    Behind the creation of the surveillance stand the technological giants of China, with an unwitting Microsoft having initiated the search for ever better surveillance many years ago. Some of the tech giants are Huawei, Hikvision and Tencent, the creator and holder of WeChat.

    Then, what is the actual difference, technologically, from our Western societies? Facebook Messenger and Alphabet/Google register as much as they can about us, on any device and with any trackers they can get hold of. Facebook has its glasses; Alphabet has its glasses and its watches. Android, as an operating system with Google services installed, allows unrestrained data collection by default. Tech companies wish to create AI from our information and sell us as products to advertisers and corporations. In China, the Communist Party wishes to create AI and control people in order to create a harmonious society without friction. The Citizen Lab released a report in 2020 named We Chat, They Watch, revealing how WeChat is used to spy on non-Chinese residents and to train AI.

    Cain’s book is a good read, and the seriousness of the issue is written on every page. To follow Maysem’s story, you need to read the book, because I won’t give you any more details. Usually, I don’t bother with graphics or design, but the cover of the book (in hardcover) is appealing.

  • Reasons and responsibilities to protect personal data

    My essay is finished. The subject was how the Swedish government wrote about personal data in two strategies, namely the so-called Digitalization strategy and the National strategy for cyber security. Who is responsible for protecting personal data and what are the reasons to protect personal data? Is there a gender perspective present?

    Personal data is omnipresent, processed by companies, organizations, state authorities, the health care sector and municipalities. Many times there is no reason at all for it, or the collection and use concern personal data that should not be processed. Simultaneously, there are plenty of stories about how personal data is harvested or scraped by various actors, and there’s virtually no way to know who holds personal data and where it is.

    Swedish news tells weekly of how information and personal data are lost or abused. Personal data is collected on such a large scale that it’s impossible to protect. Data brokers, governments, authorities: all are involved in this collection, processing and dissemination. What, then, does the Swedish government write about the responsibilities and reasons to protect it?

    Why the gender perspective? The report Malign Creativity: How Gender, Sex and Lies are Weaponized against Women Online was issued earlier in 2021. One of its conclusions is that online gendered abuse and misinformation constitute a national security issue: directed systematically at women (in this case), they result in less public participation by women in a democratic society. Much of the abuse is also directed by actors from other countries. Another conclusion is that women’s personal data can be abused and weaponized against them, for instance by spreading conspiracies about their sex, national, sexual and gender identity.

    Does personal data relate to national security in the government texts, or more to individual security? Can the loss or abuse of personal data threaten or weaken national security?

    My main conclusions are:

    ·  the Swedish government perceives everyone as responsible for personal data, though the individual has the utmost responsibility for his/her/their personal data

    ·  the government is mainly focused on thwarting crimes like child pornography

    ·  the government doesn’t want to centralize processing of personal data

    ·  too strong a state can threaten personal data and individual security

    ·  there’s a sort of built-in contradiction when the government wants public data more accessible for the creation of services by companies (for instance)

  • Idea accepted

    After many days of straying like a lost dog among the different ideas, I settled on one. I managed to narrow it down, from the fields of computational propaganda, information warfare, privacy and surveillance capitalism, to the core of many of them: personal data. What does the Swedish government write about personal data and the processing of it? How does the government write about personal data and security? Is it my responsibility to keep personal data safe and secure, or is it the government’s? And do they apply a gender perspective? What is the perspective on information security?

    May means a lot of writing on this subject. Thus the list of interesting and intriguing books grows longer and longer still. Come summer, I hope to write a little about some of them.