101 Stories to cement your AI Leadership: Episode 2
- vanrompayebart
- Nov 12, 2023
- 6 min read
Updated: Jan 28, 2024
The Dinner Dance - The two types of AI
These are additional shownotes. For the full transcript of the episode, scroll down.
Okay, so in this episode I may have taken a bit of a shortcut - because there are indeed many more types of AI, things like explicit reasoning, system optimization, or planning. All of those are valuable, and processes do fall back on humans for them. However, I do think it's fair to say that the two mentioned in the podcast, simpler information handling and ambiguous decision making, are the predominant ones. And this episode really wanted to provide some reason behind everything that has been happening in the last year, which by any measure was a bit crazy, and saw many people jump on a bandwagon they hardly understand. (And in full transparency: yes, while I do try to keep a zen attitude to the nonsense that flows around in AI, I am still amazed by the great many people who pretend to be experts but still call it 'GTP'.)
Besides receiving the most attention, the two types also occupy completely different spaces, something which is hardly ever acknowledged to the full extent. Ambiguous decision making and prediction is the type of AI (often done with rather plain vanilla Machine Learning) that most people will know from the last 10 years. This is the type of AI that pushed the EU towards drafting an AI Act in the first place. Its reach includes a variety of things: predicting which person to hire, whether a person will reoffend, which product you're most likely to buy, which content will maximize engagement on a social network, and much more. And no, this is not to say that everything it does is controversial or negative; I'm just pointing out that most of what you knew of AI before 2023 probably sits under this umbrella. To be honest, this field has always been evolving, but I don't think anyone can really remember a revolutionary moment (please correct me if I'm wrong). Yes, better algorithms became available, as did better approaches to peripheral topics like explainability, but we cannot say there has been shocking progress. For me, this is home ground. Yes, I've dabbled in the NLP and vision stuff coming up next, like anyone from my generation did, but not too much.
We did have the other type of AI as well, covered by topics like Natural Language Processing (NLP) to analyse texts and computer vision to analyse images. Looking back from the age of ChatGPT to that older NLP, the quality was really poor. This field really has been revolutionized; the quality is incomparably higher than it used to be. While some NLP methods are still applied, I think it is quickly becoming clear that old-school NLP is not the way to go anymore (unless you have really strong computational restrictions), and it is composed of tools and skills that have lost much of their value. Someone straight out of university can build an operational mail routing system by clicking away in Azure for a day or so - an NLP problem that five years ago would've taken a few PhDs a year to solve, and then at much lower quality than what you can get today. Of course, this is a relief to me: when I chose my specialization, I at least bet on the right horse!
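To make the contrast with old-school NLP concrete, here is a deliberately toy sketch of the kind of mail routing mentioned above: a bag-of-words overlap classifier in pure Python. The departments and example mails are invented for illustration, and this is of course not what the Azure tooling does under the hood - it is just a taste of the classic keyword-statistics style of NLP that modern language models have largely displaced.

```python
# Toy "old school" mail routing: bag-of-words with a nearest-centroid
# classifier, pure standard library. All departments and training mails
# below are invented for illustration.
import re
from collections import Counter

TRAIN = [
    ("my invoice seems wrong, please check the amount", "billing"),
    ("I cannot log in to my account, password reset fails", "support"),
    ("what is the price of the premium plan", "sales"),
    ("the charge on my card does not match the invoice", "billing"),
    ("the app crashes when I open the settings page", "support"),
    ("can I get a quote for 50 licenses", "sales"),
]

def bag(text):
    """Lowercase word counts, ignoring punctuation and digits."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

# One word-count "centroid" per department.
centroids = {}
for text, label in TRAIN:
    centroids.setdefault(label, Counter()).update(bag(text))

def route(mail):
    """Route a mail to the department whose vocabulary overlaps most."""
    words = bag(mail)
    def overlap(label):
        return sum(min(words[w], centroids[label][w]) for w in words)
    return max(centroids, key=overlap)

print(route("please reset my password, login is broken"))  # support
```

With a handful of examples per class this works surprisingly often, and for decades variations on this theme (TF-IDF, naive Bayes, and friends) were the workhorses of text classification - which is exactly the kind of craft that has lost much of its value.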
Now, when ChatGPT launched at the end of 2022, professionals in the AI space weren't terribly surprised. After all, the GPT-3 API had been open for quite a while, and people like me had already shown it very enthusiastically to their executives months earlier. But even so, the splash of ChatGPT was beyond anything imaginable. It was a watershed moment, and everybody started to desperately grasp at anything to hang on to: what will it mean for my organization, what should my next step be, and so on. In fact, this podcast is coming to you with more than a year's delay, most of which is due to me, as a head of AI, being overwhelmed with work generated by the launch of ChatGPT.
All this fuss is not a good thing, in my view. In the last year, decisions made by companies regarding AI have become increasingly emotional. Generative AI systems are treated like pop stars: everybody knows their names (LLaMA, GPT-4, Bard), and all the drama around them is discussed very broadly. As Head of AI at a KPMG branch, I get mails and messages whenever anything happens in AI, but never more than when Sam Altman was fired and rehired as CEO of OpenAI. People have a bad case of FOMO around Generative AI, but are not able to ground it in a reasonable technology approach (one that turns a solid vision into an ambition and a strategy, and then executes that as a well-structured roadmap). But keeping a cool head is hard to do, especially when navigating a space that you don't understand - case in point: many leaders don't really grasp the difference between the two types of AI I discussed.
I do think that understanding that difference is vital, and even technical people often only know one field in depth. In ambiguous decision making, simply fitting the algorithms to the given data isn't very hard, but understanding how to embed context and strategy when creating an analytical approach is essential. Also, you would be building the entire system yourself, which brings risks of its own. For one, the best data scientists I know have always been able to refrain from using the most complicated approaches, instead taking a risk/return view: complexity entails risks of breaking, of faulty implementation, and so on. Finally, since the human tasks automated here are typically not simple execution tasks, it is inefficient to simply replace the human intervention in the existing process. It is much better to thoroughly redesign the entire process, or even the context of the process. This need is much smaller in many of the information handling tasks, where you can often simply plug in your automated system.
The way they create value is also distinct: automation of information handling often brings efficiency gains to rather simple, repetitive processes like customer support, while the ability to automate ambiguous decisions allows you to bring straight-through automation to even the most unique and valuable processes at the core of your organization: where to invest for an investment company, when to do maintenance for a utilities company, and so on.
However, while the types of tasks for these two AIs are distinct, in the future they will most definitely be used in combination, since the interaction between them will create even more value. The reason is simple: many processes require both to be fully automated. Think about automatic treatment decisions for a cancer patient. Such a system should first outline the contours of a tumor on a radiology image, and then decide on the optimal treatment, something which has a level of ambiguity. Or a wind turbine operator that flies a drone out to inspect the blades, first recognizing cracks in the images, but then having to decide whether these warrant a costly replacement of the blade.
Finally, the ideas in this podcast are related to other work. They obviously owe a lot to the simple prediction Andrew Ng made in 2015 with his one-second rule, saying that any mental task a typical person can do with less than one second of thought would be automated through AI in the very near future. Well, that future has arrived. And one can wonder just how closely these two types of AI align with the "Thinking, Fast and Slow" division that Kahneman made. I don't think the dividing lines are exactly the same (although I admit I read that book only superficially, a long time ago). In fact, many ambiguous decisions in practice seem influenced by 'fast' processes - for example, hiring decisions influenced by unconscious racism.
I have to be honest: this episode was hard to construct. The stories behind it are complex and immediately fan out into richer stories that deserve their own episode. I ran into the same problem with these shownotes. And the illustrative story... well, I wasn't convinced. I might have been better off postponing, waiting for a stronger story, because in this podcast the story is as important as the lesson. But unfortunately, all future episodes should be seen with a clear understanding of the two types of AI. So, for now, I will just be happy that I got to think back to a lovely period of my life, and to people that I love, even after years of separation. :-)

Episode transcript
Have you ever found yourself struggling to find just the right spot to go for an impromptu dinner with friends? Today I dust off some long cherished memories, and ask myself why it’s so easy to read a menu - but so damn hard to pick a dish. I personally believe the latest fashion to call everything 'Generative AI' is wrong, and even harmful - and I bet a story might help you understand just why!
Picture this: a city center in the early evening, bustling with passers-by, and then there's that group of five or six young folks, standing on a street corner, their group pulsating, expanding and contracting, taking five steps to the left, and then... hopping six back to the right, perhaps for an hour. What are they doing? Well, this is a group of brilliant physics students, and no, they're not acting out the movement of atoms in an electric field, or showing how celestial bodies get sucked into a black hole. A much more mundane problem is keeping them dancing in the street: where should they go for dinner?
Yep, this is in fact about me in my university years, and I can tell you the problem certainly wasn't a lack of dinner options. Ghent, the city where these gastronomical struggles played out, had a true cornucopia of choices: from plain sandwiches to exotic international flavors, from budget-friendly burger joints to Michelin-starred culinary heavens. And it also wasn't a lack of brainpower holding us back: these were some of the sharpest minds I've ever known, and while physicists may have a reputation for struggling with the practicalities of everyday life, surely students are experts in the art of getting fed, right? So why the hassle?
Well, had this group found itself suddenly in, say, a Chinese city, the challenge would've been clear: we would struggle to decide where to eat, simply because we wouldn't be able to decipher all the signs and menus. But such simple information digestion wasn't our problem. We could all read the signs calling out "Pita Kebab" or "Thai food", and we knèw the neighborhood: we had sampled every place more times than you can count. Our real issue was that our question was in fact ambiguous: what is the best place to eat? For one of us, "best" meant finding a spot with low-fat options because of their diet. For another it meant a hearty, calorie-packed meal, because they were prepping for a night of post-dinner debauchery. And yet another was running low on budget for the week, and just wanted some cheap filler. So each of us considered the question in their own unique context, approaching it with their own one-of-a-kind strategy, and the resulting ambiguity is a very different challenge than reading a menu.
Organizations also apply strategy in a specific context, but in fact, both the ambiguous questions ànd the handling of rich information cause headaches. Imagine an insurance agent receiving a claim by email. First, the information needs to be digested: reading the email to extract who the client is and what type of claim it is, to then forward it to the right unit. And that unit needs to decide whether to honor the claim, which involves more than just checking contract terms and is indeed an ambiguous decision: perhaps you want to honor the claim simply because this is such a profitable client? In practice, because traditional rule-based automation offers very little in the way of handling these two types of tasks, most companies still fall back on humans, leading to costs in terms of efficiency, speed, and scalability - and this is where Artificial Intelligence can help.
The information handling tasks are done by humans day in, day out: seeing, reading, hearing, recognizing. These tasks are generic: the mental processes involved typically take less than one second of effort, and they are highly similar across people, regardless of context like which organization you work for. If you're European, you don't need a new set of eyes to look at a picture taken in Africa, and no one needs to learn to read all over again when they change jobs. One big reason for this is that the information involved, the data, is highly standardized: images are images, texts are texts, and sound is sound. Finally, discussing the correct execution of these tasks is rather straightforward: an image represents a tiger, or it doesn't, even if stylistic variations exist. Interestingly, these information handling tasks almost always have an inverse: you can read an email, or you can write one. You can look at an image of a train, or draw one. You can create computer code for a task, or describe the task a given piece of code performs. This is of course what "Generative AI" is really all about, and its recent rise has been terribly disruptive for all of these information handling tasks, allowing great automation almost out of the box.
The second type of task, however, taking those ambiguous decisions, is virtually the opposite of information handling. These problems require real thought and are often deeply unique. Most people never have to predict clogging of public sewers, or set an optimal price for a washing detergent, or identify a tax return as fraudulent. And even when two organizations happen to struggle with the same issue, their solutions will differ because of context and strategy. The appropriate mortgage rate for a prospective bank client will depend on context like the interest rates set by central banks, but also on the strategy of the bank: a bank aggressively expanding its client base will eagerly offer a sharp rate, but one simply focused on drawing value from existing clients won't. These ambiguous problems often rely on tabular data whose meaning is highly context-specific. For a mass retail bank, a client with half a million euros is a rather big catch, but the same client may not even qualify for joining a private bank, so the same number means a drastically different thing to different organizations. This influence of context and strategy means that decision makers need to adjust their mental processes when changing jobs, and it also often makes it hard to debate 'correctness'; after all, the appropriate mortgage rate isn't hard-coded in the DNA of that bank client. Finally, for an ambiguous problem it is often hard to imagine the inverse: perhaps 'describe the type of client that would get this mortgage rate'? But that's rather clumsy, no?
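A tiny, completely made-up sketch of how context and strategy shape that mortgage decision. The pricing formula, the risk numbers, and the margins below are all invented for illustration - in a real bank the risk component would be a model fitted to its own historical data. The point is only that identical context plus a different strategy yields a different decision:

```python
# Hypothetical sketch: same context, two strategies, two decisions.
# Every number and the formula itself are invented for illustration.

def mortgage_rate(central_bank_rate, client_risk_score, strategy):
    """Offer rate = funding cost + risk premium + strategic margin (all in %)."""
    risk_premium = 0.5 + 1.5 * client_risk_score   # stand-in for a fitted risk model
    margins = {"expand_client_base": 0.2,          # sharp rate to win new clients
               "harvest_existing": 0.9}            # wider margin, less eager
    return round(central_bank_rate + risk_premium + margins[strategy], 2)

# Identical context (central bank rate, client risk), different strategy:
ctx = dict(central_bank_rate=4.0, client_risk_score=0.3)
print(mortgage_rate(**ctx, strategy="expand_client_base"))  # 5.15
print(mortgage_rate(**ctx, strategy="harvest_existing"))    # 5.85
```

No vendor can ship that `margins` dictionary to you: it encodes your strategy, which is exactly why these tailored decisions keep needing your own data scientists.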
Most processes in organizations include steps of both simple information handling and ambiguous decision making, so to get rich value from straight-through automation, you need to master both with Artificial Intelligence. Still, based on the above, I cannot emphasize enough how fundamentally different the two are: in the types of data they use, in the approaches they demand, and in the skills they require. Any AI leader who doesn't account for these differences in practice is setting themselves up for failure. For example, the information handling tasks are so generic that a foundational model from OpenAI or Google can often perform them close to perfection. Hardly any organization really builds such systems from scratch anymore, and the skills your organization really needs involve technical integration and customization, leading some to proclaim that the machine learning engineer has replaced the old-school data scientist. But for the ambiguous decisions, the good people at OpenAI do not know your specific context or strategy, meaning you will still need data scientists to embed your specific needs into tailored algorithms. Also, the risks involved differ, since one relies mostly on generic vendor solutions, while the other is often home-brewed. More and more, these two types of AI are drifting apart; they require different governance and culture, and already today teams that do wonderfully well on one type may fall horribly short on the other.
These two types of AI also follow different paths to value: automation of information handling typically brings big efficiency gains to rather simple, repetitive processes like customer support, while the ability to automate ambiguous decisions also allows you to bring straight-through automation to even the most unique and valuable processes at the heart of your organization: where to invest for an investment bank, when to do maintenance for a utilities company, or what to eat for a hungry student.
In the wake of the rise of ChatGPT there's now a fashion to call every type of AI "generative AI" - after all, it sounds great, no? Well, I advise every AI leader to avoid this trend, and to distinguish clearly between the two types of tasks: information handling (now often done wonderfully well with generative AI) and ambiguous decision-making. After all, they're drastically different in the skills they require, in the burden they place on your organization, in the risks they entail, but also in the way they create value for you.
Thank you for listening to this second episode; I hope you enjoyed it, or that it at least got your thinking going. Because this was just one of 101 stories to cement your AI leadership, and many questions remain. Luckily, we still have a few episodes to go - 99, if the title is anything to go by. I felt I had to explain these two types of AI this time, because next time, I would like to offer some historical perspective on just how mature the field of AI is - or rather, the two fields of AI.


