Tag: AI

  • Book review: Quantum Supremacy

    Lately, I’ve become interested in quantum computing and wrote a short paper on the subject, combining the race for quantum computers with questions of equality between nations. While doing some very basic research I came across a video of the famous physicist Michio Kaku talking about quantum computers as the next revolution. So I bought his book, with its very long title: Quantum Supremacy: How Quantum Computers Will Unlock the Mysteries of Science – and Address Humanity’s Biggest Challenges.

    Kaku is a very charming man. He asserts that Silicon Valley might become the next Rust Belt unless it can compete in the race for quantum computers, that the age of silicon is over and the era of quantum mechanics is beginning. Kaku is likeable, a man of positivity, which I admire in a world of too much bleakness and passivity. However, some of Kaku’s initial assertions are overstated and even faulty.

    These flaws concerned me from the start. One is the common confusion surrounding Google’s claim of “quantum supremacy” in 2019. Yes, Google claimed supremacy, meaning its machine performed a task considerably faster than a classical computer (as digital, binary computers are called in relation to quantum ones), but rather falsely so. The comparison was with an IBM computer, and IBM retorted by speeding up its machine, refuting Google’s claim. IBM seems to have been right, because the computation performed was more like a simulation than an actual computer calculating. Therefore, no real supremacy.

    Secondly, the assertion that a company’s value on the stock market is trustworthy evidence of real progress (PsiQuantum was initially valued at $3.1 billion, without any computer at all) is no evidence at all, since many companies have been valued at bazillions without any product or service near completion (dot-com bubble, anyone?).

    Thirdly, Kaku claims “everyone” is involved and engaged in the race for quantum supremacy, which is a lie rather than an overstatement. Looking at this map, it’s obvious that very few countries and companies are actually involved, or have the resources to be involved at all. Kaku comes across as an overly eager and enthusiastic scientist with a very positive view of the future, which is nice and badly needed, but appears naïve at times.

    After these wild assertions Kaku delves into the real stuff, quantum theory and quantum mechanics, and it gets exciting – really exciting. (For anyone interested, I can recommend Adam Becker’s What Is Real?, one of the best books ever written on these extremely complex subjects, as a counterweight; it gives perspective on the debates, issues and controversies surrounding quantum physics.) Kaku presents various interpretations of these issues and how they relate to quantum computing, and introduces various quantum computers in use today, including the quantum annealing architecture of D-Wave. After reading, one comprehends the immense difficulty of producing a functioning, stable and predictable quantum computer, and how far humanity remains from a dependable architecture.

    Kaku then delivers his pitches about how quantum computers could help humankind evolve and solve serious issues, such as climate change, biotechnology, cancer and fusion power. At first, I get annoyed, especially with passages like:

    “In fact, one day quantum computers could make possible a gigantic national repository of up-to-the-minute genomic data, using our bathrooms to scan the entire population for the earliest sign of cancer cells.”

    Well, no thank you, not given the lack of privacy and the serious misuse of personal data in today’s world.

    But it gets better. Kaku brings us into the fields of health care and medicine, and later physics, his specialty, and with these subjects he slows down. He enters a more thoughtful, reasoning pace. He’s very dedicated to preventing and curing diseases, with a pathos I find touching. Sometimes he reaches for the stars, hoping quantum computers might aid us in finding the cures humanity needs in order to vanquish the severe diseases afflicting us.

    I’m unqualified to judge how quantum computers might help, even though he teaches me about quantum mechanics and physics along the way, which is really enjoyable. And when he slows down, he argues pro and contra: how quantum computers can help us live longer, how the search for longevity can result in misery, how things are very complicated and precarious. I appreciate the “on the one hand” and “on the other”, as when he claims that geoengineering is the last, desperate step in preventing more damaging climate change, because what seems benign can become malign.

    In the end he goes futuristic again, telling us about a fantastic world with quantum computers in the year 2050. Why has this become a trend? Carissa Véliz uses the same device at the beginning of her book to exemplify the world of today, and David Runciman turns to the year 2053 when he wants to tell us how democracy dies. It’s shallow. Leave it to Ghost in the Shell.

    Then I remember his words on learning machines and artificial intelligence, where he recounts a conversation with Rodney Brooks of MIT’s Artificial Intelligence Laboratory about the top-down approach to programming machines:

    “… Mother Nature designs creatures that are pattern-seeking learning machines, using trial and error to navigate the world. They make mistakes, but with each iteration, they come closer to success.”

    So, instead of programming every motion and piece of logic from the top down, AI should rather be built from the bottom up. Kaku continues with the “commonsense problem”: the issue that computers are far too stupid to comprehend simple things very small children easily understand. Children rapidly learn things computers cannot even begin to grasp, simply because children learn from their mistakes. Like other animals and insects, humans correct their mistakes and try to do better, while computers get stuck in loops, or simply cannot grasp why a mother is always older than her children, for instance. Kaku claims classical computers are unable to learn many such commonsensical things. Are quantum computers needed for this step to be taken?

    I think of classical AI as Ava in the movie Ex Machina: cunning and learning, but slow and fragile. AI powered by quantum computing might rather be like Connor in the game Detroit: Become Human – an android superior to humans in plenty of ways. While reading this book, and some other sources, it became clear to me how superior quantum computers might be at sensing, data analysis and processing copious amounts of data.

    All in all, it’s a positive book about what may happen when, or if, quantum supremacy is reached. By happenstance, Geopolitics Decanted recently published a podcast episode on quantum computing and artificial intelligence, an episode I recommend.

  • Book review: The Creativity Code

    Marcus du Sautoy is a British mathematician who has published several books on mathematics, appeared on TV and is highly regarded as an educator. In 2018 he released a book called The Creativity Code: How AI Is Learning to Write, Paint and Think (du Sautoy is very fond of the word code in general, as in human code and creativity code), in which he ponders the meaning of artificial intelligence and its implications for culture. He begins by admitting to an existential crisis: having devoted his life to mathematics, he realises AI might make computers superior to humans, rendering his skills inadequate and insufficient, even unnecessary.

    First, the book focuses on how different artificial intelligence programs are created and how they function. He retells the famous story of DeepMind (Google DeepMind since 2014) and its advances in machine learning and AI, with examples such as AlphaGo and AlphaFold. Du Sautoy connects the urge to create AI with the human urge to create books, paintings and music, then adds philosophical notions and ideas about what constitutes free will, action and reasoning. How can we be sure a program is thinking or acting of its own free will, as we assume a human does? Are humans programmed to act in certain ways, and how much do we act out of free will?

    ChatGPT has been prone to hallucinate, a phenomenon Kevin Roose wrote about, and the program has since been limited to a certain number of responses. Du Sautoy anticipates this tendency in generative AI (remember that the book was published five years ago): such programs resemble drunk people fumbling in the dark. Or, as Grumpy Old Geeks put it, like a guy drinking too many beers and going “Aaah, fuck it!”.

    The inability to comprehend why or how something happens in a computer program creates ambiguity, insecurity or outright hostility towards artificial intelligence. Why an AI program woke up in the middle of the night was incomprehensible, creating a foreboding sense of algorithmic apocalypse. Music and mathematics are closely related, and du Sautoy mentions several attempts to use artificial intelligence to create music, such as an app created by Massive Attack.

    DeepDream is an attempt to understand the algorithms of machine learning and avoid incomprehensible black boxes. This relates to the robots created by Sony Computer Science Laboratories: robots teaching each other to name their own movements and communicate with one another, creating a conundrum for the humans watching, who cannot understand the words unless they also interact with the robots. Guaranteeing there are no bugs or errors in such code, however, is a growing issue and challenge.

    An example of a black box is the hunt for mathematical theorems. A program spits out new theorems. The issue? No one can understand them, because they are not told as stories, simply emitted as lines of mathematical “code”, so to speak. No mathematician can tell what the lines actually mean. Is this kind of AI necessary or useful? Just as du Sautoy writes, mathematics needs to be told, needs to be storified; otherwise it’s incomprehensible nonsense.

    Does one need emotions and a sense of physical space in order to understand? Does one need this understanding in order to communicate with others about it? He gets philosophical, but that’s a necessary approach if we’re to comprehend artificial intelligence and its effects on society, rather than just talk about technical details and functions.

    At the end, du Sautoy returns to his anxiety, his existential crisis, about computers excelling at mathematics (and physics), but he also states that mathematics is infinite, whereas humans are not. Perhaps that’s why we need AI, he suggests, because mathematics is larger than us.