Tag: Artificial Intelligence

  • Impacts of AI programs in the public sector

    As a local part-time politician I have noticed how popular artificial intelligence has become, especially among civil servants. Everyone is urged to “try out” ChatGPT, for the sake of its brilliance and its ability to help us. However, the enthusiasm for AI is rarely matched by consideration of its environmental impacts.

    In the very near future, my suspicion is that standard environmental impact assessments (EIAs) might become routine procedure, a common perspective brought to the table whenever an AI program (yes, I’m fully aware they’re called models, but I will persist in calling them programs, as in computer programs) is used or acquired by public authorities. How much energy did this program, used by land surveyors, cost to train? How much has this program, used for the registry or for drafting a referred proposal, affected the climate in terms of carbon dioxide emissions?
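
    To give a flavour of what such an assessment could look like, here is a back-of-envelope sketch. Every number in it is an assumption of mine for illustration, not an official figure; the point is only that training energy and emissions are estimable quantities an EIA could demand.

    ```python
    # Back-of-envelope estimate of training energy and CO2 for an AI program.
    # All numbers below are illustrative assumptions, not measured figures.
    gpus = 1000                 # accelerators used for training (assumed)
    power_kw = 0.7              # average draw per accelerator in kW (assumed)
    hours = 30 * 24             # one month of continuous training (assumed)
    pue = 1.2                   # data-centre overhead factor (assumed)
    kg_co2_per_kwh = 0.4        # grid carbon intensity (assumed)

    energy_kwh = gpus * power_kw * hours * pue
    emissions_tonnes = energy_kwh * kg_co2_per_kwh / 1000

    print(f"Energy: {energy_kwh:,.0f} kWh")            # ~600,000 kWh
    print(f"Emissions: {emissions_tonnes:,.0f} t CO2")  # ~240 tonnes
    ```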

    Likewise, I believe there will be risk assessments in alignment with the European Union’s AI Act. In fact, the Swedish government commissioned an inquiry, published in the Official Government Reports series (named Safe and Reliable Use of AI in Sweden), to adapt Swedish regulation to the EU level.

    Another prediction is that AI will not remain confined to large LLMs or programs. Instead, the public sector will use small, specific programs, perhaps even local programs similar to DeepSeek, trained on local data for local use.

    AI giants have increased their carbon emissions since the AI boom began. Microsoft has increased its emissions, and so has Google, in both cases related to data centres focused on AI. I read in the Washington Post how Eric Schmidt (now of the Special Competitive Studies Project) asserted that environmental concerns need to step back in favour of energy development for the sake of AI. AI programs will simply solve climate change and environmental destruction. What a relief.

  • Book review: Unmasking AI

    I simply don’t have the time to review the books I read alongside the papers and book chapters from the courses I study. In the last month alone, I’ve read about 30 papers on mining, Indigenous peoples, sovereignty and territory. So, Joy Buolamwini’s book Unmasking AI: My Mission to Protect What Is Human in a World of Machines I actually finished in April last year.

    Buolamwini is a computer scientist from MIT who rose to stardom through research proving how artificial intelligence programs were trained on very skewed and distorted data. She mentions the Shirley card: a photographic standard with a white woman as the “ideal composition and exposure setting.” The Shirley card is also included in Brian Christian’s brilliant The Alignment Problem, the book where I first encountered her name.

    Overarching aims of the book

    There are two important terms in the book. Algorithmic bias occurs when one is disfavoured or discriminated against by an AI program, and the coded gaze is the evidence of encoded discrimination against, and exclusion of, certain people in technology. As Buolamwini does research on artificial intelligence in image processing, she discovers how algorithmic bias underlies many programs, and how the coded gaze excludes her own face from being detected by an AI program.

    I agree wholeheartedly with her overarching approach: artificial intelligence will not solve climate change, racism or poverty. In the words of Rumman Chowdhury, “the moral outsourcing of hard decisions to machines does not solve the underlying social dilemmas.” Buolamwini continues: “AI reflects both the aspirations and limitations of its makers.” We must also take the initiative to halt our own tools.

    Another important term is the AI functionality fallacy, commonly called hallucination: simply, when “the system doesn’t work properly,” though most people will be fooled by the program and believe it is working.

    Facial recognition technologies are the core of her research; as she states, “there are many different types of face-related tasks that machines can perform.” I’m grateful she separates face/facial detection from facial recognition. Not many people explain the difference, which can be tremendous. When a program can detect that there is a face, it’s face/facial detection. Facial recognition is when the program can discern faces, separate them, and perhaps even see who is who.
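
    To make the distinction concrete, here is a minimal, hypothetical sketch of my own (not from the book): detect_faces, embed_face and the gallery are stand-ins for whatever real models a face pipeline would use. Detection only answers “is there a face, and where?”; recognition compares a face against known identities.

    ```python
    import numpy as np

    def detect_faces(image):
        """Face DETECTION: find where faces are; returns bounding boxes.
        Stubbed here; a real detector would run a trained model."""
        return [(40, 40, 100, 100)]  # pretend one face was found

    def embed_face(image, box):
        """Map a face crop to a feature vector (stubbed with fixed noise)."""
        rng = np.random.default_rng(0)
        return rng.normal(size=128)

    def recognize(embedding, gallery):
        """Face RECOGNITION: compare an embedding against known identities
        and return the closest match by cosine similarity."""
        def cosine(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return max(gallery, key=lambda name: cosine(embedding, gallery[name]))

    image = np.zeros((200, 200, 3))
    boxes = detect_faces(image)            # detection: "there is a face here"
    emb = embed_face(image, boxes[0])
    gallery = {"alice": embed_face(image, boxes[0]), "bob": np.ones(128)}
    print(recognize(emb, gallery))         # recognition: "this face is alice"
    ```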

    Technological details

    For a book aimed at lay people, it takes a pleasant dive into the technological details of how artificial intelligence programs can work, touching on nodes and neural networks. She asks very important ethical questions, which constitute a cornerstone of this book and her fame: “Was the data obtained with consent? What were the working conditions and compensation for the workers who processed the data?”

    Furthermore, she explains the importance of classification and strategic sampling of things and people. As a student, I find her methods and choices of data collection interesting. It matters greatly which data you choose and why you choose it; this needs thorough discussion in research. Motives, reasons and usage should be transparent and well comprehended by others.

    The power of labelling, of establishing ground truth, lies in the hands of very few people. What is depicted in an image? Few people hold the power to classify people, animals, plants, cars and so on, as well as attributes such as gender, sex and skin tone. The world is rendered a little more static in terms of gender and sexual orientation, although people are fluid, not brands stuck in time.

    Methodological issues

    The publication of her master’s thesis had implications for people’s jobs at companies related to her research. We discussed this very thing in one of my courses: how do you justify the publication of your research if people or organisations are named? Is it truly justified? What do you want to achieve: attention, improvement, a job? This is an issue for me, since she mentions she excluded the worst findings: that would have been complete heresy in the social sciences. You include the general and the deviant; you don’t deselect data. That would severely damage your credibility.

    Affronted celebrity

    As she writes later in the book, she, among others, is excluded as a participant on 60 Minutes. She is affronted and aggrieved. Together with teams of people, she writes a petition to CBS.

    How many of us have “teams of people” signing a petition to a TV channel? In the book she mentions how she put a lot of work into this participation, while also writing the last of her PhD dissertation. Funnily, we discussed this kind of behaviour in class: what happens to a researcher who’s used to standing in the spotlight, who’s used to being listened to? What happens when a researcher becomes an activist and a media star? Here, I think Buolamwini doesn’t see clearly, even when she admits she’s used to the spotlight.

    It could be that we, in this regard, live in very different countries, since a researcher here couldn’t really do all the commercials, sponsor documentaries covering themselves and run organisations doing work similar to their research project. It could be that I’m a social scientist, so this kind of critical thinking is supposed to pervade our education. It could be that our self-confidence differs greatly. You simply can’t take yourself that much for granted. But I do think she can’t blame anyone but herself for doing too many things at the same time. There’s only so much one person can do, and accepting limitations is necessary, without blaming others. Furthermore, I disagree with her claim that she was excluded only for being a black woman, most likely because she can’t see her own privilege after all the media attention. How many people achieve this status after a few years? I know researchers who can’t even get published in local news because they’re deemed uninteresting or irrelevant. I know researchers who perceive you as mainstream, lame and non-critical if you participate in commercial conferences, on TV and in commercials; that you’re part of the system you pretend to fight. Lastly, simply because you think what you’re doing is important doesn’t mean everyone else will, or at least not all the time.

    This is the only bad part of the book, but it is bad: lamenting not being shown on national American TV, as if everyone famous were entitled to it, as if being famous for a cause equalled the right to be seen, heard and listened to.

    All in all

    Still, she’s impressive. Regarding an ad campaign for Olay, she dwells on the advantages and disadvantages of doing a campaign for skin care. It can seem shallow at first, but I definitely comprehend the reasons for doing it. Women of colour are often excluded, not being targeted as consumers. And why shouldn’t people want to look good, even if it depends on skin products? Why shouldn’t activists have the right to promote something they deem important? To fight for inclusion and the right to be vain or good-looking or whatever is part of democracy.

  • Harvesting US agencies for Grok?

    Few have escaped the unconstitutional encroachments of Elon Musk(olini) (professional manchild) into US agencies, with his team of followers (at least 37 people, because the lone male “genius” of popular portrayal simply doesn’t exist; such men always rely on lots of followers and fixers).

    Ostensibly they’re dismantling agencies (USAID was instituted by the Republican Party in the 1990s, by the way) and “saving expenditures” for the sake of saving money and perhaps decreasing the US debt. Personally, I believe the real purpose is, primarily, to harvest as much data on the population as possible and provide all of it to Muskolini’s Grok AI. The scaling laws demand more data, and why not harvest secret and non-official data? Without it, AI programs can neither proceed nor progress, and now Grok has an advantage. Whoever wins this war of artificial intelligence wins all of it (it is presumed) and can control the population with extremely sensitive data on virtually every American.

    Secondly, Grok will have the capacity to surveil and weed out uncomfortable and inconvenient employees in the federal bureaucracy. If necessary, they’ll fire more people and bring in loyalists and sycophants to fill the vacant places.

    As Ezra Klein put it: “Congress is a place where you can lose. […] Trump is acting like a king, because he’s too weak to govern like a president.” So, expect no resistance from the weak Republicans in Congress. And this is what happens to democracy and bureaucracy when “entrepreneurs” think they can play government.

    From now on, I’ll follow the Canadian motto “Buy Canadian”, though in the way of “Do not buy American whenever you can avoid it.”

  • A visit to Brussels

    In October our class went to Belgium to visit the European Week of Regions and Cities 2024 during our course on the European Union. We stayed in Mechelen, a nice city north of Brussels. EURegionsWeek spans four days. Representatives from regions and cities all over the EU, as well as academics and lobbyists, gather in Brussels to listen to workshops and panels on various themes, such as youth and democracy, energy and climate, and digitisation and artificial intelligence.

    Each of us chose one or two themes that could align with the assignment. Mine were AI and energy. My assignment focused on the implementation of the AI Act in a Swedish municipality’s social services.

    The most interesting panel discussion focused on the European semiconductor industry, with barely ten attendees, while the most boring one focused on a member’s personal interest in climate change (a personal speech that gave no clue about her actual work for the organisation she represented).

    Travelling with the teachers was nice; they knew Brussels and EURegionsWeek well. We drank some Belgian beer (geuze) and wandered the streets of Brussels.

  • Book review: The Alignment Problem

    The Alignment Problem: Machine Learning and Human Values. How do you compress one of the toughest, most intellectually demanding issues facing humanity into one book of about 300 pages? I certainly wouldn’t be up for the task. Brian Christian is: a computer scientist inclined towards philosophy and, through this book at least, towards psychology too.

    You’ve probably heard about reinforcement learning in conversations on AI. It originates from psychology and animal behaviourism, like so many other parts of the field of AI (neural networks and temporal differences are two others), while other parts touch on philosophical issues and conundrums humans have pondered for centuries. Brian Christian, like Johann Hari, travels the world to interview lots of people about how to get machines to understand and obey humans.

    What’s it like to code artificial intelligence? Think of AI programming as asking a genie for wishes. How do you truly and literally articulate three wishes (for instance, what even is a thing or a wish, where does it begin and where does it end)? How can you ever be sure the provider (the program) comprehends the three wishes precisely the same way you do?

    You wish for a long, healthy life. What is long? Stretched out, or with a lifespan beginning and ending clearly? Longevity as in an average human life now, or 2,000 years ago, or 120 years into the future? Long as a star? Long as a giraffe’s neck? What’s included in the word healthy? Not being obese? Not being lanky? Being muscular? Living healthily for 30 years and then suddenly dying of an aneurysm? Or living healthily for 85 years and then fading away over two decades? Does healthy mean you can take up smoking without any repercussions, so that it causes you to die of a lung or heart disease you otherwise wouldn’t? Does it mean you can suffer from terrible diseases if you catch or cause them, but never have a cold or a light fever? Besides, what is life? (Should you rather be able to wish with your inner thoughts depicted to the receiver? Then what happens if you’re interrupted by other thoughts within those thoughts?)

    These are very simple examples of how hard it is to code, to express what you wish a program to execute. What you wish, you’re very unlikely to express in an exact manner to a machine, because you can’t project every single detail to it: the alignment problem.
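
    To make that concrete, here is a toy of my own (not from the book): we “wish” for a long and healthy life, but the objective we actually write down only encodes “long”, and the optimizer exploits exactly that gap.

    ```python
    # Toy specification problem: the objective below encodes only "long",
    # not "healthy", so the optimizer happily picks the degenerate option.
    # All candidates and numbers are invented for illustration.
    candidates = [
        {"years": 85, "healthy_years": 80},
        {"years": 120, "healthy_years": 30},    # long, but mostly unhealthy
        {"years": 2000, "healthy_years": 0},    # "long" taken literally
    ]

    def objective(life):
        return life["years"]   # the genie optimizes exactly what we wrote

    print(max(candidates, key=objective))
    # -> {'years': 2000, 'healthy_years': 0}: what we asked, not what we meant
    ```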

    Compared to plenty of writers on AI or code (perhaps with the exception of Scott J. Shapiro), Christian really delves into deep issues here. He won’t let you simply read the book; he dives into details, presents thoughts, then provokes you by delving deeper and then even deeper. He reasons on driving, for instance: male drivers are generally worse than female drivers. Applying AI as a solution to this issue could mean male drivers are targeted primarily, which means fewer female drivers are targeted. Traffic could thus become worse, since the untargeted female drivers would then, on average, be driving less safely than the remaining male drivers. Applying AI seems simple and straightforward, but very seldom is. Christian concludes that “alignment will be messy.”

    To program artificial intelligence, one also needs to understand politics, sociology and gender, the social sciences, because what do words like “good”, “bad”, “accurate” and “female” really mean? Any word needs some context, and what is that context, or those contexts? Christian mentions sociologists who won’t reduce their models to dichotomies, whereas that’s how computer scientists tend to perceive reality. The two need to cooperate to adjust the programs to reality as best they can, which may not always be feasible. As Shapiro writes, you simply can’t reduce reality this way. And messy as it is, you can’t make the AI program neutral/blind either (just ask Google about the black Nazi soldier produced by Google Bard/Gemini).

    The book is thorough. Christian immerses the reader in temporal difference (TD) learning and sparsity in reinforcement learning, independent and identically distributed (i.i.d.) data in supervised/unsupervised learning, redundant encoding (like the above-mentioned discussion on gender), simple models, saliency, multitask nets and a bunch of guys sitting around the table (BOGSAT).
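
    For readers who want a feel for one of those concepts, below is a minimal sketch of the TD(0) update at the heart of temporal-difference learning, on a toy five-state chain of my own invention (nothing here is from the book).

    ```python
    import random

    # TD(0) on a toy chain: states 0..4, reward 1.0 for reaching state 4.
    # Each estimate is nudged towards "reward plus discounted value of the
    # next state", which is the temporal-difference idea in one line.
    values = {s: 0.0 for s in range(5)}
    alpha, gamma = 0.1, 0.9   # learning rate and discount factor

    for _ in range(2000):
        s = random.randrange(4)            # sample a transition s -> s + 1
        s_next = s + 1
        reward = 1.0 if s_next == 4 else 0.0
        td_error = reward + gamma * values[s_next] - values[s]
        values[s] += alpha * td_error      # the TD(0) update

    print(values)  # states closer to the goal end up with higher values
    ```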

    This book is a true achievement. This book is a gift to humanity. This is the one book on artificial intelligence to read.

    (If you’re disinclined to read the book, I recommend this podcast episode instead, though it’s almost three hours long: https://80000hours.org/podcast/episodes/brian-christian-the-alignment-problem/)

  • On long, hard thoughts

    Right now The Ezra Klein Show is running a series of podcast episodes on artificial intelligence (just like in early 2023). Yesterday I listened to the discussion with Nilay Patel (which I of course recommend). Among the things they discussed was how hard thinking risks being discarded with the introduction of A.I. programs such as ChatGPT 4 or Claude.

    People risk becoming lazy. Instead of sitting there typing away at your keyboard (or writing on your notepad, or mind-wandering), you’ll turn to your digital assistant. It will do the sorting, thinking and writing instead of you. It’s tempting to think it’ll help you. But in the long run it won’t.

    Humans thrive when we need to learn, when we think thoroughly about a subject or an issue. Learning is precisely what you miss when taking a shortcut. Writing slowly requires you to learn, because you need to learn in order to write: about your chosen subject, related subjects, yourself, the people in your vicinity, the society and context you’re in, and about past times.

    Sune Lehmann, a Danish researcher, has led research on how people read and talk nowadays compared to earlier, and found that we speak and read faster than before. The inundation of information creates disconnections in the thinking process. Thinking faster, most likely, won’t save time, as Cal Newport writes in Slow Productivity and Jenny Odell in her book Saving Time. Only proper thought and genuine dedication will take you there.

    After much resistance I’ve begun to “explore” the four big A.I. programs: Google Gemini (Advanced), OpenAI’s ChatGPT 4 (the premium version, that is), Microsoft Copilot and Anthropic’s Claude. Somehow I’m not very impressed, and so far I’m not sure why. Perhaps I’m used to Google Allo (when it existed), the Moto X2 and its Google Now, mIRC bots in the 1990s and advanced web searches, and thus not impressed by programs suggesting rice as a substitute for noodles. No, those earlier tools are not as competent as today’s programs, but they’ve made me expect more, making it harder to surprise me. Perhaps because it’s autocorrect in action?

    Rice as a substitute for noodles, I already know. The proposed research questions on digital transnational repression, I also already know, because these programs make suggestions based on what has already been done, not what could be done that no one has ever done. As far as I understand, the programs still base their suggestions, like autocorrect, on what has been done. I don’t expect them to get entangled in futuristic predictions, but I do expect somewhat more than ideas that have already been suggested many times over.

    Writing on your own can be painstaking. But it creates learning. Being challenged is usually good for your mental and intellectual state of mind. Being served things on a platter won’t make you skilled or learned at things. Doing things will.

  • Book review: Quantum Supremacy

    Lately, I’ve become interested in quantum computing and wrote a short paper on the subject, combining the race for quantum computers with equality between nations. While doing some very basic research I encountered a video of a famous physicist talking about quantum computers as the next revolution: Michio Kaku. So I bought his book, with its very long title: Quantum Supremacy: How Quantum Computers will Unlock the Mysteries of Science – and Address Humanity’s Biggest Challenges.

    Kaku is a very charming man, asserting that Silicon Valley might become the next Rust Belt unless it can compete in the race for quantum computers: the age of silicon is over and the age of quantum mechanics is beginning. Kaku is sympathetic, a man with a positivity I admire in a world of too much bleakness and passivity. However, some of his initial assertions are somewhat overstated and even faulty.

    These flaws concerned me initially. One is the common confusion regarding Google’s claim of “quantum supremacy” in 2019. Yes, Google claimed supremacy, meaning it could perform a computation considerably faster than a classical computer (as digital/binary computers are called in relation to quantum ones), and rather falsely so. The comparison concerned an IBM computer, and IBM retorted by speeding up its machine, refuting Google’s claim. IBM seems to have been right, because the computation performed was more like a simulation than an actual calculation. Therefore, no real supremacy.

    Secondly, the assertion that a company’s net value on the stock market is trustworthy evidence of real progress (PsiQuantum was valued at $3.1 billion initially, without any computer at all) holds no water, since many companies have been valued at bazillions without any product or service near completion (dot-com bubble, anyone?).

    Thirdly, Kaku claims “everyone” is involved and engaged in the race for quantum supremacy, which is a lie rather than an overstatement. Looking at this map, it’s obvious that very few countries and companies are actually involved, or have the resources to be involved at all. Kaku comes across as an overly eager and enthusiastic scientist with a very positive view of the future, which is nice and badly needed, but appears naïve at times.

    After these wild assertions Kaku delves into the real stuff, quantum theory and quantum mechanics, and it gets exciting, really exciting. (For anyone interested, I can recommend Adam Becker’s What Is Real? as a counterweight on these extremely complex subjects; it is one of the best books ever written, giving perspective on the debates, issues and controversies of quantum physics.) Kaku presents various interpretations of the aforementioned issues and how they relate to quantum computing, and introduces various quantum computers in use today, including D-Wave’s quantum annealing architecture. After reading, one comprehends the immense, erratic difficulty of producing a functioning, stable and predictable quantum computer, and how far away humanity is from a dependable architecture.

    Kaku delivers his pitches about how quantum computers can advance humankind and tackle serious challenges such as climate change, biotechnology, cancer and fusion power. At first, I get annoyed, especially with pieces like:

    “In fact, one day quantum computers could make possible a gigantic national repository of up-to-the-minute genomic data, using our bathrooms to scan the entire population for the earliest sign of cancer cells.”

    Well, no thank you, not considering the lack of privacy and the serious misuse of personal data in today’s world.

    But it gets better. Kaku brings us into the fields of health care and medicine, and later physics, his specialty, and with these subjects he slows down. He enters a more thoughtful, reasoning pace. He’s very dedicated to preventing and curing diseases, with a pathos I find touching. Sometimes he reaches for the stars, hoping quantum computers might aid us in finding the cures humanity needs in order to vanquish the severe diseases afflicting us.

    I’m unqualified to judge how quantum computers might help, even though he teaches me about quantum mechanics and physics, which is really enjoyable. And when he slows down, he argues pro and contra: how quantum computers can help us live longer, how the search for longevity can result in misery, how things are very complicated and precarious. I appreciate the “on the one hand” and “on the other” when he claims that geoengineering is the last desperate step in preventing further damaging climate change, because what seems benign can become malign.

    In the end he goes futuristic again, telling us about a fantastic world with quantum computers in the year 2050. Why has this become a trend? Carissa Véliz uses this method of exemplifying the world of today at the beginning of her book, and David Runciman turns to the year 2053 when he wants to tell us how democracy dies. It’s shallow. Leave it to Ghost in the Shell.

    Then I remember his words on learning machines and artificial intelligence, writing about a conversation with Rodney Brooks of MIT’s Artificial Intelligence Laboratory on the top-down approach to programming machines and programs:

    “… Mother Nature designs creatures that are pattern-seeking learning machines, using trial and error to navigate the world. They make mistakes, but with each iteration, they come closer to success.”

    So, instead of programming every motion and logic from the top down, AI should rather be built from the bottom up. Kaku continues with the “commonsense problem”: computers are far too stupid to comprehend simple things very small children easily understand. Children rapidly learn things computers cannot even begin to grasp, simply because children learn from their mistakes. Like other animals and insects, humans correct mistakes and try to do better, while computers get stuck in loops, or simply aren’t fit to understand why a mother is always older than her children, for instance. Kaku claims classical computers aren’t able to learn many of these commonsensical things. Are quantum computers needed for this step to be taken?

    I think of classical AI as Ava in the movie Ex Machina: cunning and learning, but slow and fragile. AI powered by quantum computing might rather be like Connor in the game Detroit: Become Human, an android superior to humans in plenty of ways. While reading this book, and some other sources, it became clear how superior quantum computers might be at sensing, data analysis and processing copious amounts of data.

    All in all, it’s a positive book about what may happen when, or if, quantum supremacy is reached. By happenstance, Geopolitics Decanted published a new podcast episode on quantum computing and artificial intelligence recently, an episode I recommend.