Book review: The Creativity Code


Marcus du Sautoy is a British mathematician who has published several books on mathematics, appeared on TV and is highly regarded as an educator. In 2018 he released The Creativity Code: How AI Is Learning to Write, Paint and Think (du Sautoy is very fond of the word code in general, as in human code and creativity code), in which he ponders the meaning of artificial intelligence and its implications for culture. He begins by admitting to an existential crisis: having devoted his life to mathematics, he realises that AI might make computers superior to humans, rendering his skills inadequate, even unnecessary.

First, the book focuses on how different artificial intelligence programs are created and how they function. He retells the famous story of DeepMind (acquired by Google in 2014) and its advances in machine learning and AI, with examples such as AlphaGo and AlphaFold. Du Sautoy connects the urge to create AI with the human urge to create books and stories, paintings and music, and then raises philosophical questions about what constitutes free will, action and reasoning. How can we be sure whether a program, as opposed to a human, is thinking or acting of its own free will? Are humans programmed to act in certain ways, and how much of what we do is genuinely free?

ChatGPT has been prone to hallucination, a phenomenon Kevin Roose has written about, and the program was subsequently limited to a certain number of responses. Du Sautoy anticipates this tendency in generative AI (remember that the book was published five years ago), writing that such systems resemble drunk people fumbling in the dark. Or, as Grumpy Old Geeks put it, like a guy who has had too many beers going "Aaah, fuck it!".

The inability to comprehend why or how something happens inside a computer program creates ambiguity, insecurity or outright hostility towards artificial intelligence. Why an AI program woke up in the middle of the night was incomprehensible, creating a foreboding sense of algorithmic apocalypse. Music and mathematics are closely related, and he mentions several attempts to use artificial intelligence to create music, such as an app created by Massive Attack.

DeepDream is an attempt to understand the algorithms of machine learning and to avoid incomprehensible black boxes. This relates to the robots created by Sony Computer Science Laboratory. These robots teach each other names for their own movements in order to communicate with one another, creating a conundrum for the humans watching, who cannot understand the words unless they also interact with the robots. Guaranteeing that there are no bugs or errors in the code, however, is an increasingly difficult challenge.

An example of a black box is the hunt for mathematical theorems. A program spits out new theorems. The issue? No one can understand them, because they are never explained; they are simply lines of mathematical "code", so to speak, and no mathematician can tell what the lines actually mean. Is this kind of AI necessary or useful? As du Sautoy writes, mathematics needs to be told, needs to be storified, otherwise it's incomprehensible nonsense.

Does one need emotions and a sense of physical space in order to understand? Does one need this understanding in order to communicate with others about it? He gets philosophical, but that's a necessary approach if we're to comprehend artificial intelligence and its effects on society, rather than merely talk about technical details and functions.

At the end, du Sautoy returns to his anxiety, his existential crisis, about computers excelling at mathematics (and physics), but he also notes that mathematics is infinite, whereas humans are not. Perhaps that's why we need AI, he suggests, because mathematics is larger than us.