Category: Reflection

  • Impacts of AI programs in the public sector

    As a local part-time politician I have noticed how artificial intelligence has become popular, especially among civil servants. Everyone is urged to “try out” ChatGPT for the sake of its brilliance, its ability to help us. However, the enthusiasm for AI is not matched by any consideration of its environmental impact.

    In the very near future, my suspicion is that environmental impact assessments (EIA) might become standard procedure, a common perspective brought to the table whenever an AI program (yes, I’m fully aware they’re called models, but I will persist in calling them programs, as in computer programs) is used or acquired by public authorities. How much energy did it cost to train this program, used by land surveyors? How much has this program, used for the registry or for writing a referred proposal, affected the climate in terms of carbon dioxide emissions?

    Likewise, I believe there will be risk assessments in alignment with the European Union’s AI Act. In fact, the Swedish government approved an official government inquiry (named Safe and reliable use of AI in Sweden, part of the Swedish Government Official Reports series) to adapt Swedish regulation to the EU level.

    Another prediction is that AI will not remain a matter of large LLMs or general-purpose programs. Instead, the public sector will use small, specific programs, perhaps even local programs similar to DeepSeek, trained on local data for local use.

    AI giants have increased their carbon emissions since the AI boom began. Microsoft has increased its emissions, and so has Google, in both cases related to data centres focused on AI. I read in the Washington Post how Eric Schmidt (now of the Special Competitive Studies Project) asserted that environmental concerns need to step back in favour of energy development for the sake of AI. AI programs will simply solve climate change and environmental destruction. What a relief.

  • On long, hard thoughts

    Right now The Ezra Klein Show has a series of podcast episodes on artificial intelligence (just as in early 2023). Yesterday I listened to the discussion with Nilay Patel (of course I recommend it). Among the things they discussed was how hard thinking is at risk of being discarded with the introduction of A.I. programs such as ChatGPT 4 or Claude.

    People risk becoming lazy. Instead of sitting there typing away at your keyboard (or writing on your notepad, or mind-wandering), you’ll turn to your digital assistant. It will do the sorting, thinking and writing instead of the human. It’s tempting to think it’ll help you. But in the long run it won’t.

    Humans thrive when they need to learn, when they think thoroughly about a subject or an issue. Learning is precisely what you miss when you take a shortcut. Writing slowly requires you to learn, because you need learning in order to write: about your chosen subject and related subjects, about yourself, about the people in your vicinity, about the society and context you’re in, about past times.

    Sune Lehmann, a Danish researcher, has led research on how people read and talk nowadays compared with earlier generations, and found that we speak and read faster than before. The inundation of information creates disconnections in the thinking process. Thinking faster, most likely, won’t save time, as Cal Newport writes in Slow Productivity and Jenny Odell in her book Saving Time. Only proper thought and genuine dedication will take you there.

    After much resistance I’ve begun to “explore” the four big A.I. programs: Google Gemini (Advanced), OpenAI’s ChatGPT 4 (the premium version, that is), Microsoft Copilot and Anthropic’s Claude. Somehow I’m not very impressed. So far I’m not sure why. Perhaps I’m used to Google Allo (when it existed), the Moto X2 and its Google Now, mIRC bots in the 1990s, and advanced web searches, and thus am not impressed by programs suggesting rice as a substitute for noodles. No, the earlier tools I’ve mentioned are not as competent and good as today’s programs, but they’ve made me expect more, making it harder to surprise me. Perhaps because it’s autocorrect in action?

    Rice as a substitute for noodles, I already know. I also already know the proposed research questions on digital transnational repression, because these programs make suggestions based on what has already been done, not what could be done that no one has ever done. As far as I understand, the programs still base their suggestions, like autocorrect, on what has been done. I don’t expect them to get entangled in futuristic predictions, but I do expect somehow more than ideas that have already been suggested many times over.

    Writing on your own can be painstaking. But it creates learning. Being challenged is usually good for your mental and intellectual state. Being served things on a platter won’t make you skilled or learned. Doing things will.

  • Technoreligion: youth and grown-ups

    Recently, the Swedish right-wing government proposed outlawing mobile phones in classrooms (compulsory school) because of negative results in PISA, which “measures 15-year-olds’ ability to use their reading, mathematics and science knowledge and skills to meet real-life challenges.” Previous centrist governments have proposed the same.

    Simple causation: bad results lead to a conclusion about bad influence, which leads to an obvious solution.

    Approximately 80 % of all schools already ban phones during the school day. So the proposals are just populist ideas, supposed to prove that governments are perceptive.

    Rather, I see the usual conclusion as backwards, which is an unpopular stance whenever I discuss it with grown-ups. The solution, from my perspective, is very simple: the grown-ups, first and foremost, must stop using phones so recklessly, so disrespectfully, so much. Don’t expect children and adolescents to put down their devices when the grown-ups show how it’s done: device in hand at all times.

    Our digital toys (a distant relative called them theme parks) are so precious we simply can’t let go of them. But since children are children and can be harmed, they should learn their place and proper behaviour. The idea that grown-ups are role models is completely absent in this conversation.

    It’s considered rude to question the technological “evolution”. Virtually nothing is possible or plausible or feasible to hinder. It’s “development”, the ubiquitous unstoppable force of linear thought and progression.

    Plenty of young children have Snapchat or TikTok. Amazingly, they’re not even 13 years old. Does that matter? They’re not allowed to use these companies’ platforms because they are underage. It’s a violation of the terms of service. Do parents seem to care? No. Do companies seem to care? No.

    Discord, among others, and video games have been blamed for exposing children and youth to extremism. Or rather, Discord is a platform serving extremism, while video games are the vehicle used by extremists to disseminate information, misinformation and disinformation.

    YouTube Kids has provided 9-year-olds and 14-year-olds with recommendations for videos about guns, gun-related violence, gun modification and injuries.

    Studies in the U.S. show deteriorating health among teens, especially girls, coinciding with the introduction of the smartphone. Adam Alter, in his book Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked, mentions a young teenage girl lamenting the presence of a friend who doesn’t listen to her, because the friend is too focused on her phone to care. Absence in presence.

    Imagine being ten years old and getting your first smartphone. You have the usual flora of social media apps. Let’s say you receive roughly 100 notifications every 24 hours, every day for ten years. By the time the child has left adolescence and become an adult, they have received 100 × 365 × 10 = 365 000 notifications. A notification is designed to create a reaction: vibration, sound, light. Even one reaction is usually enough to increase your heart rate. 365 000 bodily reactions in your child.
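    A minimal back-of-the-envelope sketch in Python, assuming the same figures as above (100 notifications per day is my own round number, and leap days are ignored):

    ```python
    # Back-of-the-envelope notification arithmetic (assumptions as in the text:
    # 100 notifications per day, 365 days per year, ten years, leap days ignored).
    notifications_per_day = 100
    days_per_year = 365
    years = 10

    total = notifications_per_day * days_per_year * years
    print(f"{total} notifications over {years} years")  # 365000 notifications over 10 years
    ```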

    Then there is the issue of parents systematically exposing their own children on social media. Once upon a time I blogged about being a parent, even co-creating a podcast on the subject. One reason was the almost total lack of parents writing about being a parent, about what it entails: your tips and tricks, shortcomings, logistics, fears. Most bloggers wrote about consumption or clothing: buying things for your children. And the almost relentless pictorial depiction of children dressed in clothes, playing with computers or iPads. Grown-ups exposing children. I even saw pictures of children sitting on the toilet, smiling, only three years old. Imagine a teenager knowing you did that. Once you turn 80 and need to rely on a walker or support taking a shower, imagine your grandchildren photographing you and posting the picture with a funny comment on the Internet. Lucky you!

    It’s fascinating how much the grown-ups want to keep squeezing their own phones. No matter how much youth suffer or are exposed, we simply can’t let go of the screen ourselves. When will it stop?

    My argument? As long as grown-ups can’t be grown-ups, the school cannot help solve the issue. The governmental solution is based on false causation and the false premise that mobile phones are used in compulsory school. But in many schools, electronic devices aren’t allowed. They’re handed to the staff, kept in lockers during the school day and returned to the children when they leave for home.

    Since the issue isn’t children using phones in schools, my argument is that grown-ups are to blame. They must stop being addicted, much like a smoking parent must stop smoking rather than merely stopping their children from seeing them smoke, or this issue won’t be solved. But since we’re a technoreligious society, where grown-ups hold the power and write the rules, the grown-ups won’t put down their phones.

  • The debate on hybrid warfare and Russia

    During the conference Society and Defence (Folk och Försvar), the Supreme Commander of the Swedish Armed Forces said that the Swedish people must acknowledge the contingency of war. It caused an outcry. Presumably, this was stated as a fact to make people realize that we cannot remain ever-present observers, never involved or engaged in truly troublesome things with exogenous causes. We tend to have an immunized perception of catastrophes. What disturbs me, rather, is how the term hybrid warfare is used. It’s misleading, to say the least.

    Hybrid warfare is, according to an article on a book about hybrid warfare, a combination of “conventional warfare with non-conventional warfare”. According to this post on NATO’s website, “hybrid warfare entails an interplay or fusion of conventional as well as unconventional instruments of power and tools of subversion”. Earlier in the same post, the author writes: “With the advent of modern hybrid warfare, they are less and less about lethal or kinetic force.”

    Of course it depends on how war is defined (usually a minimum of 1,000 people dead in a conflict between states or defined groups over a period of time, such as two years). Is informational “warfare” an act of war? Are disinformation and misinformation “warfare”? Is cyber espionage “warfare”? Is hacking by malware and payloads “warfare”? Are they kinetic or lethal? Both of these sources refer to the Russian interference in Ukraine starting in 2014, when Russia mixed conventional weapons on the ground (mainly) with disinformation and deniability, without actually declaring war on Ukraine.

    This also requires referring to the so-called Gerasimov Doctrine, a term coined by Mark Galeotti. Valery Gerasimov is still the (not-so-successful) Russian Chief of the General Staff, and current commander of the Russian forces in Ukraine, who in 2014 (what a coincidence) outlined different strategies to tackle the superiority of the West. This led to the birth of the so-called Gerasimov Doctrine, a doctrine influential in the eyes of many Westerners as “new thinking” on war. Combining cyber weapons and kinetic force was perceived as a new paradigm. Galeotti didn’t see it as novel.

    I turn to the political scientist Lucas Kello, author of The Virtual Weapon and International Order, to look for definitions of terms and concepts. Although he discusses the terms and consequences of cyberweapons in cyberspace, the discussion can be used to comprehend how words are used and related to a subject:

    The crucial definitional criterion of a virtual weapon lies in its intended and possible effects.

    Does Russia intend to wage war against Finland or Sweden? Most likely not. Are we at war, a prerequisite of the term? No. Kello continues:

    Cyberattack need not result in physical destruction to pose a serious danger to society. […] It is reasonable to impose limits on this language.

    You need to know when to ascribe a term and when not to. Lucas Kello asserts that even though virtual information “has become force itself” in certain situations, psychology and information have long been part of war strategy. Still, they are usually not regarded as a kinetic force, as kinetic weapons, because “human death is the highest form of physical damage.” It remains difficult to harm humans by hacking computers or deploying A.I. (yes, I’m aware of a patient dying when hackers shut down a hospital in a ransomware attack, and yes, I’m aware of the false suggestion that an A.I. program killed an operator, which was simply a simulation made by humans, with no real people involved in the simulation itself).

    It’s important to keep terms apart, and as coherent and adequate as possible. Including more only blurs concepts, making them so general that they can no longer be applied. Another important reason to keep terms clean: Sweden and Finland are not at war with Russia, which the term hybrid warfare denotes, yet mass media and experts still utter the term often. Russia is trying to strain and contain our countries, making hybrid threats the proper term rather than hybrid warfare.

    Perhaps, though, I’ll propose a new word. Or two: