On long, hard thoughts

Right now The Ezra Klein Show has a series of podcast episodes on artificial intelligence (just like in early 2023). Yesterday I listened to the discussion with Nilay Patel (of course I recommend it). Among the things they discussed was how hard thinking is at risk of being discarded with the introduction of A.I. programs such as ChatGPT 4 or Claude.

People risk becoming lazy. Instead of sitting there typing away at your keyboard (or writing on your notepad, or mind-wandering), you’ll turn to your digital assistant. It will do the sorting, thinking and writing instead of the human. It’s tempting to think it’ll help you. But in the long run it won’t.

Humans thrive when they need to learn, thinking thoroughly about a subject or an issue. Learning is precisely what you miss when you take a shortcut. Writing slowly requires you to learn, because you need learning in order to write: about your chosen subject, related subjects, yourself, the people in your vicinity, the society and context you’re in, and about past times.

Sune Lehmann, a Danish researcher, has led research on how people read and talk nowadays compared to earlier times, and found that we speak and read faster than before. The inundation of information creates disconnections in the thinking process. Thinking faster most likely won’t save time, as Cal Newport writes in Slow Productivity and Jenny Odell in her book Saving Time. Only proper thought and genuine dedication will get you there.

After much resistance I’ve begun to “explore” the four big A.I. programs: Google Gemini (Advanced), OpenAI’s ChatGPT 4 (the premium version, that is), Microsoft Copilot and Anthropic’s Claude. Somehow I’m not very impressed, and so far I’m not sure why. Perhaps it’s because I’m used to Google Allo (when it existed), the Moto X2 and its Google Now, mIRC bots in the 1990s, and advanced web searches, and thus am not impressed by programs suggesting rice as a substitute for noodles. No, the earlier tools I’ve mentioned are not as competent and good as the programs of today, but they’ve made me expect more, making it harder to surprise me. Or perhaps it’s because this is autocorrect in action?

That rice is a substitute for noodles, I already know. I also know the proposed research questions for digital transnational repression, because these programs make suggestions based on what has already been done, not on what could be done that no one has ever done. As far as I understand, the programs still base their suggestions, autocorrect-like, on what has been done. I’m not expecting futuristic predictions, but I do expect something more than ideas that have already been suggested many times over.

Writing on your own can be painstaking. But it creates learning. Being challenged is usually good for your mental and intellectual state. Being served things on a platter won’t make you skilled or learned. Doing things will.