Artificial intelligence is fantastic! It’s like having a personal assistant at my disposal 24/7, be it for work or personal purposes! Personally I use AI mostly for work, analysing and reworking large data sets in Excel, where AI suggests code in the Python programming language to help me make complex calculations rather than writing massive nested formulas. AI helps me transcribe meetings and summarise emails and documents, suggesting conclusions and plans of action. And it helps me prepare at least the framework of presentations, including charts based on existing data.
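To give a flavour of what I mean: where a nested Excel formula quickly becomes unreadable, a few lines of Python do the same job. Below is a minimal sketch of the kind of snippet an AI assistant might propose, using the pandas library; the column names and figures are made up purely for illustration.

```python
import pandas as pd

# Toy sales data standing in for an Excel sheet (illustrative only)
df = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "quarter": ["Q1", "Q2", "Q1", "Q2"],
    "revenue": [120, 150, 90, 110],
})

# One readable line replaces a nested SUMIFS/IF construction:
# total revenue per region, counting only quarters above a threshold
totals = df[df["revenue"] > 100].groupby("region")["revenue"].sum()
print(totals)
```

The point is not that the calculation is impossible in Excel, but that the Python version stays legible as the conditions multiply.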

AI also allows me to understand emails and other documents in foreign languages – and to draft an appropriate answer in the foreign tongue. And where I used to refer to traditional search engines for enquiries, I now ask my AI assistant to find answers to my pertinent questions.

Using AI I have definitely become more productive. But now I start questioning whether I am also becoming more knowledgeable. Just imagine for a minute that you don’t need to learn a foreign language any longer to grasp other people’s (written) communication. And I don’t really need to learn Excel formulas since AI will analyse the data for me. Personally I had to learn how to use Excel spreadsheets the hard way, by trial and error and referring to the manual. But at least I do now understand the functions and formulas I frequently use and know their limitations. As far as languages are concerned, I learnt them in classrooms and by living and working in regions and countries where they were spoken, immersing myself into the local culture.

But then I look at, for example, the young people in my workplace: They are very adept – probably even more so than I am – at using artificial intelligence, but sometimes I wonder whether they actually understand how the information is being processed, and whether, if we switched off the AI tomorrow, they would be able to do the same work using purely their own human intelligence. Can they understand and manually summarise a complex concept? It reminds me a bit of my days as a young man, when we all were reasonably good at (simple) mental maths – up to the time when the pocket calculator was invented and typing numbers into a device was not only much easier but also allowed much more complex calculations.

And artificial intelligence is definitely not yet up to the task of making decisions as we humans do, mixing strictly logical reasoning with our own personal perception of, for example, a business partner’s risk tolerance based on many decades of experience in the business world. I am a management consultant advising businesses, and I frequently encounter situations where logical analysis of the outcomes of different options leads to one distinct recommendation. But then, being well aware of how my client’s business processes work and taking into consideration their overall acceptance of change and risk, I sometimes recommend a different course of action, knowing that this may lead to less radical results – but since it is more likely to be accepted, it will ultimately be the more sustainable solution. How do you get AI to take such soft factors into consideration? Or take languages: How can you check or correct a translation by AI if you don’t master the language yourself?

As I said at the outset, I am a great fan of artificial intelligence, as long as it doesn’t prevent us from learning ourselves, be it only to be able to check AI’s outputs, just as you would verify a human assistant’s work. OpenAI’s Sam Altman has already claimed that ChatGPT is cleverer than any human that has ever lived, and says that “…the worst‑case scenario is lights out for all of us.” Now that no doubt is an extreme scenario, but there are already a number of examples of AI making wrong or biased decisions, such as in 2019 when Apple Card and its issuer Goldman Sachs used AI to set credit limits and loan eligibility: Customers reported that the Apple Card algorithm gave significantly lower credit limits to women, even when their finances were better, yet Goldman Sachs representatives said they couldn’t explain the algorithm’s decisions. Or take Boeing’s MCAS (Maneuvering Characteristics Augmentation System) on the 737 Max: In two crashes (Lion Air and Ethiopian Airlines), MCAS made flawed assumptions based on faulty sensor data and overrode pilot input, causing fatal nose dives from which the pilots were unable to recover.

So I dare ask: Does AI make us dumb? As humans we are inherently lazy, taking the easy option whenever we can, and hence there is a danger that in time we will rely more on intelligence of the artificial kind than we should. As with everything else, a healthy dose of scepticism is in order, and ultimately I think it is crucial that humans remain in control of what AI is allowed to do and how the output is put to use. But within the realm of such reasonable constraints, I think the potential is huge. And if it allows us to spend more time doing meaningful stuff and enjoy maybe a bit more free time, then all the better.

6 Comments

  1. My experience with AI is that I often get slightly the wrong answer. Because, I realise, I have asked it slightly the wrong question. I’m not being flippant here. In order to make AI work well for us, the onus is on humans to learn *how* to use AI. That’s easier said than done.


  2. I agree! The more precise the question you ask AI, the more accurate the reply will be. What AI still suffers from is short-term focus: it is good at executing a single short task, but give it tasks which involve too many details and steps and the output is disappointing… but, so I am told, that’s going to change gradually. In this respect GPT‑5, due to be released by OpenAI sometime this month, is apparently already much improved…


    1. I think it’d be fascinating to think in terms of having a conversation with AI, which implies that it has some knowledge of who we are (among presumably many different users), and also some memory of previous interactions.

      So we might say something like “give me the report you just gave me, but insert column X between columns Y and Z” (and expect a sensible response).


  3. Absolutely! And as a work colleague has already shown me in some very practical examples, this already works quite well: amending presentations, making changes to Excel files etc. To get the most out of it, though, a monthly subscription to ChatGPT Plus at £/€/$20 will be required. I for one am certainly considering getting one…


  4. I don’t believe AI can make “us” dumb. Lazy, yes, but dumb, no. And now, recognize my use of the word “us.” You. And me. As for most of the rest of users, yes, I believe those who never grow and never understand basic conundrums will be tragically more than simply lazy. Well written. Enjoyed it.

