As can be expected, there is a big debate raging about the pros and cons of Artificial Intelligence (AI), not least in our company, where one of my colleagues professes to be a bit of an expert in the matter (he runs his own AI consulting business on the side). My colleague is convinced that within a matter of years AI will surpass human intelligence, and hence be well suited to replace us, or at least to make decisions as good as (or probably better than) those most people would make. I am not convinced.
Daniel Kahneman, an Israeli-American psychologist best known for his work on the psychology of judgment and decision-making as well as behavioural economics (he was co-awarded the Nobel Prize in Economics in 2002), analysed human thinking and decision-making processes in his famous book Thinking, Fast and Slow. He distinguished between two modes of thinking. System 1 is fast, intuitive, and automatic; it helps us make quick judgments and decisions with little effort, but it is prone to biases and errors. System 2 is slow, deliberate, and effortful; it is responsible for logical thinking, problem-solving, and critical analysis, but it requires more mental energy and is often lazy.
AI, especially in the form of current generative models, excels at tasks that align with System 1 thinking: fast, pattern-based, intuitive responses. System 2 thinking, on the other hand, requires deep understanding, critical analysis, and reflection, things AI lacks. Artificial Intelligence processes vast amounts of data and recognises patterns, but it does not “understand” concepts the way humans do. It does not engage in true reasoning but rather simulates it. And while AI can generate new ideas based on existing patterns, true creativity often comes from deep contemplation, personal experiences, and abstract thought; human innovation often involves breaking from past patterns. System 2 thinking is also essential for ethical decision-making, which requires careful weighing of values, intentions, and long-term consequences. AI lacks a moral compass and cannot engage in genuine ethical reasoning; it simply follows programmed rules or statistical probabilities. And I have not even mentioned human self-awareness and the way we reinterpret information in context (contextual nuance), something AI is likely to struggle with for a while yet. And last but not least, there are human emotions and intuition (the famous gut feeling), which influence our thought processes. AI can mimic emotions but does not experience them, which limits its ability to make human-centred decisions in areas like leadership, relationships, and personal growth.
Then there are the rare, unpredictable events with massive consequences, the Black Swans described in Nassim Nicholas Taleb’s The Black Swan: The Impact of the Highly Improbable, such as the 9/11 attacks or the financial crisis of 2008. In hindsight we try to rationalise them, pretending they were obvious and should have been predictable. Taleb, however, argues that people rely too much on past data and fail to account for outlier events. We assume the future will resemble the past, which makes us blind to rare, high-impact occurrences. AI systems likewise rely on past data to make predictions, but Black Swan events break historical patterns: trading algorithms that assumed market stability were at least in part responsible for the financial crisis of 2008 and exposed the risk of over-relying on quantitative models. Or take misinformation and deepfakes: AI-generated content could unexpectedly reshape politics, influencing elections or spreading misinformation in ways we have not foreseen. AI models also often fail under unexpected conditions, for example facial recognition on edge cases, or self-driving cars struggling in rare but deadly situations.
So I ask again: will Artificial Intelligence ever outsmart the human brain? And do we really want it to? Clearly there would be advantages: complex decisions could be ‘outsourced’. But that raises the question of liability: who is responsible if a self-driving car kills a pedestrian? The passenger of the vehicle, who failed to supervise the car? The car manufacturer? The company which provided or programmed the self-driving mechanism? Would you want a government to entrust, for example, the decision to launch a nuclear strike (or counter-strike, for that matter) to an algorithm? In the past I have often (but not always!) found that I can trust my gut feeling, but I am not sure, at least not yet, that I would trust even the most sophisticated computer and software more than my most personal past experiences.
As you have probably realised by now, the future of AI raises, for the moment, at least as many questions as it answers. AI is powerful for automating System 1 tasks, providing suggestions, and analysing large datasets. However, for tasks requiring deep reasoning, ethical judgment, creativity, self-awareness and long-term planning, human slow thinking remains irreplaceable. Some of our decisions will also depend on the expert advice of a specialist. Would you consider undergoing surgery on the basis of a diagnosis established by AI? Or to what extent would you expect a doctor to have validated the automated findings?
I am using AI in a personal and professional capacity quite regularly now, and I am very pleased with the results. AI may augment System 2 thinking, but it cannot, and maybe should not, replace it – in my humble opinion, at least not for the foreseeable future.
Agreed. The closer technology gets to full automation, the more questions will be raised about things like liability. That’s the big reason why I’m sceptical. We’ve already seen 737s dive into the ground because of technology; it’ll just take a few more of these mass-casualty events to alert people to the dangers.
Exactly. And that’s why self-driving vehicles will take much longer to become mainstream than most people realise.
Yup, anything like that. Insurance companies will spend forever arguing about who owns the risk. I think for that reason there will definitely be a high-water mark for any technology, whether it be AI or not.