101 Stories to cement your AI Leadership: Episode 1
- vanrompayebart
- Sep 17, 2023
- 6 min read
The worst waiter in the world - no action = no value = no sense
These are additional show notes. For the full transcript of the episode, scroll down.
Getting started is the hardest thing, I must say. Which story to lead with? Well, I like this one, simply because it is the most important one. AI and data science are just means to an end, so the most important story cannot be about AI itself: what it is, how it behaves, what to look out for, and so on. It should be about the fundamental goal you are trying to achieve. And for me that’s easy: you want to create value, and to do that you need to take action. Therefore, building AI systems that leave any ambiguity about the actions they should generate, or that dilute the impact of those actions, doesn’t make sense.
This baseline is surprisingly controversial for some. I often hear arguments like: “But we cannot just trust those AI systems, right?” Or: “But they are too expensive to build!” True, those are all valid concerns, but the essential message is the shift in the burden of evidence that was mentioned in the podcast. If you are truly committed to transforming your business and believe that the future is digital, then digital should be the norm. Trust, for example, should be established by creating proper support systems and governance. It is easy to come up with reasons not to go straight-through digital, but keep in mind: your competitor might do it. Or some startup might do it and become the next Amazon, which realized early on that sales and delivery could be automated. Executives are paid to prepare their organization for the future and cannot afford to be too conservative. More and more, this means they have to ruthlessly identify the bottlenecks that stand between the organization and the value, and then eliminate them. And those bottlenecks are not the humans themselves, but yes, they are often the human steps in our processes.
One simple but concrete example is found in document-handling systems. The AI system first extracts the text from (potentially scanned) documents, and then extracts the information needed to identify and execute the appropriate action. If this process is fully automated, most customers can often be helped in the blink of an eye, and at substantially reduced cost for the organization. However, all the value of such a system can be destroyed by insisting on one simple thing: that it only serve as ‘insight’ to a human, or even just that a human check whether the action is appropriate. This would require the human to read the document, causing delays, increasing the human workload again, and limiting scalability during busy times. Organizations should realize there really are better ways to create trust and checks and balances in such cases, and putting the burden of evidence on the human element instead might avoid this destruction of value.
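To make the idea concrete, here is a minimal sketch of such a straight-through pipeline. This is purely illustrative: the function names are my own, and the keyword-based routing is a toy placeholder standing in for real OCR and information-extraction models.

```python
def extract_text(document: bytes) -> str:
    """Stand-in for the OCR step: here we simply assume the scan
    decodes to plain text."""
    return document.decode("utf-8")


def classify_action(text: str) -> str:
    """Toy keyword router standing in for an information-extraction
    model that identifies the appropriate action."""
    lowered = text.lower()
    if "cancel" in lowered:
        return "cancel_contract"
    if "address" in lowered:
        return "update_address"
    # Fallback: only genuinely ambiguous cases reach a human.
    return "route_to_human"


def handle(document: bytes) -> str:
    """Straight-through processing: extract, decide, and act without a
    human reading every document. In a real system this would trigger
    the action in a backend, with monitoring and sampling-based quality
    checks serving as the safeguard instead of per-document review."""
    return classify_action(extract_text(document))
```

The design point is in the fallback branch: rather than a human checking every action, only the ambiguous minority is escalated, so the speed and cost benefits survive while trust is built through monitoring rather than review.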
In practice, the focus on driving actions is often missing in various ways. In the past, I’ve often been asked to build systems, but when I asked how they would be used exactly, people couldn’t tell me. Or organizations fail to adapt their processes around the solution, leading to suboptimal business outcomes because the actions are not executed optimally. Surprisingly, it is often left to those building the AI system to clarify its later use, even though they are often technical experts with little context knowledge, and they will most certainly not be the ones using it in their operations. As a result, solutions can become stuck in the proof-of-concept phase because they do not fit the exact need, or because nobody got buy-in from the later users. At its core, this is something good leadership can solve by instilling an action-oriented mindset in the organization, and by creating governance that focuses on practical actions.
Another symptom of not critically considering the role AI plays in your organization’s activities is the lack of measurement of the value of your AI. I have met with dozens of organizations who could not tell me the value they reaped from their AI systems. Sometimes this is due to problems in measuring, but more often it is a simple lack of measurement, and even more often than that, it is because the organization has no definition of value at all. And creating such a definition, well, that really starts by understanding how the systems lead to action. In this respect, it is revealing that the exact value of dashboards is often shrouded in mystery, and post-deployment value measurements are rare. The root cause of this is often the lack of understanding of how exactly the dashboard has impacted operations – something which is inherent to the ‘insights’ paradigm.
The mere number of dashboards might even serve as a measure of data maturity: an organization with too few dashboards may be immature, but so may one with too many. Many organizations suffer a real overload of dashboards. I once worked with a mid-sized organization that started counting their operational dashboards, but simply stopped when they reached around 8,000, with no end in sight. Nobody could really say what the value of all these dashboards was, or even whether they were still being used. And contrary to what people think, dashboards do not come cheap: in another organization, the total cost of ownership of an average simple dashboard was estimated at between 30,000 and 40,000 euros. And if ‘citizen data science’ is seen as a cheap workaround, think again. Data is just another language for describing reality, and like any language it has many subtleties that not everyone understands. In every organization it is easier to get data that is lacking in at least one respect than it is to get correct, complete, up-to-date, relevant data. And every data analysis, even a simple aggregation, still requires interpretation, yet another problematic hurdle for most. Precisely because citizen data science is cheap, it is used for less important matters, and fewer safeguards are applied in the creation process than in, say, the development of AI algorithms. Typically, little effort is made to adapt or monitor operations appropriately, to educate users, or to follow up on the quality of the output, and maintenance costs are not anticipated. As a result, the tools created through citizen data science risk becoming “zombies” – numerous, toxic zombies that slowly poison and weigh down the organization. In the end, they may just create stronger and more pervasive noise for those trying to read the signals and steer operations effectively.
Despite these previous comments, dashboards and business intelligence (BI) are vital tools for successful organizations. While I may have seemed negative about BI, my intention was to highlight that its true purpose is not always kept in mind, and that it should not be the default design pattern in a future-proof organization. The proliferation of dashboards is often due to a limited view of what it means to be “data-driven”. It is not enough to ‘include some data in your decision process’. A future episode may delve deeper into this, but at a minimum, it should also involve a reflex to exclude personal assumptions and interpretations from the process. In this sense, dashboards should strive to be objective, presenting hard facts and measurements rather than attempting to convey insights, which are essentially readings and interpretations of facts. Usually the analysis is not rich enough for such interpretation anyway, giving questionable guidance concerning needed actions. And besides this, BI merely strives to better inform tasks executed by humans, meaning many value-generating mechanisms (like reducing throughput time or reducing human workload) barely come into play. With AI, we should really aim higher with our ambitions, and strive to build systems that can execute the tasks themselves.
Finally, there is one more benefit to having actions driven directly by AI systems: it creates clarity about responsibilities and makes it easier to provide safeguards. When processes involve both human and AI contributions, governance can become complicated: what is each party’s responsibility? Where and how should safeguards be implemented? With AI it is clear that the needed safeguards should be built by technical specialists with a specific scope in mind. However, if that scope only covers part of a process, governance that was originally adequate might become insufficient once the human steps of the process start to evolve in interaction with the AI. As a common example: when AI systems only provide suggestions to humans, the assumption is that the humans will inherently double-check the AI output. This decreases the emphasis on creating other potential safeguards, but typically, over time, humans learn to trust the AI and lose their critical reflexes, effectively leaving the process without any meaningful safeguards at all. This is just one manifestation of the so-called ‘automation paradox’, a very common phenomenon when humans collaborate with automated systems, often with grave consequences.

Episode transcript
When people think about the least effective colleagues in their team, they often think about their inability to take action. In this first episode, we’ll explore whether the same should not be said about all the smart computer systems our organizations build and use: are AI systems that don’t lead to action as powerful as they could be? I personally really don’t like the word ‘insights’ when describing what AI delivers – and I bet a story might help you understand just why!
I am certain that many of you, when you were young, have spent whole summers, weekends, and perhaps regular evenings waiting tables in a bar, or a restaurant. Now imagine for a second that you are the manager of such a bar, and you have hired some young student to wait tables for you. It is a very busy evening, so you’re tending the bar while the student is going around checking on tables, and at one point he – or she – walks up to the bar and says: “Hey.” You look up, what’s up with this guy, you’re busy, right? And then the student says “you know, I’ve been over at table 6”. Uhm, ok-ay?? See, now you’re getting a bit annoyed, get to the point man! So the student says: “Well, I’ve noticed the people at table 6, euhm, their glasses are getting empty, so probably you should go and ask if they want a refill?” Oompf! Tell me, would you not want to fire that guy? Really, you hire these students to do things, to actually serve the customers, not just to walk around and observe things, correct?
Well, now consider the same situation, but the waiter is not a person, no, it’s a robot. Would you then still be as annoyed? Surprisingly, in our daily lives and our businesses, we behave as if we don’t mind, all the time. What that young waiter is giving you is an insight. He’s diagnosed the situation perfectly, and explained it very clearly to you. But the reason you don’t like it is that you have hired him to also take the next step, to take the needed action. Only if the waiter takes action is value created for you. With our smart computer systems, however, we are perfectly happy to limit them to the insight alone, for some reason. There’s a whole field that thrives on this, which we call business intelligence. In business intelligence, you typically get a dashboard that visualizes some more or less simple analysis, and then the human is supposed to understand and take the appropriate action, but in a great many cases, this just doesn’t make sense. Because why would you not demand more from your systems, why would you not have them take the needed action directly? Many people in business still want to pass through a human channel to explicitly decide and act. But often, if you build a smart system that can access and digest more information than you as a human, a system that can decide free from emotion or distraction, and that can do so a million times per second, then what does that human still have to offer that is indispensable? Like the waiter, computer systems only create value if they lead to an active intervention, and in that ‘insights’ paradigm, where you digest some data and then feed it to a human to decide and act, you really introduce three major risks in your process: first, the action doesn’t get taken. The bartender may forget the information about the empty glasses, he may not recognize its significance, or perhaps he simply disagrees with the conclusion. Second, the action is taken too late.
By the time the bartender gets around to listening to the waiter, and then finds the time to serve the customers, they may have left. Without paying. Mmh. Third, there is a risk that the wrong action gets taken. The bartender misunderstands and looks at table 16, not 6, just as so many dashboards are misunderstood every single day. Or perhaps he doesn’t believe the waiter, like that time when the people at Chernobyl decided not to trust the readings on their console. Whatever the reason, it is clear: in deciding and acting, humans often get in the way, leading to error-prone processes with issues of speed and scalability. With well-designed AI systems, we have the opportunity to avoid this, and in AI we should strive for a system that fully automates the task at hand, rather than merely facilitating it. This also opens some of the most common paths to value for AI: the ability to make processes straight-through digital, to deal with higher volumes, and to do so at lightning speed, while also avoiding problems when volumes vary over time. Therefore, the burden of proof should be on including human bottlenecks in processes, and they should only be allowed if there is a clear and compelling reason to do so. As an example: setting a customized price for a customer is one of the most important aspects of commercial activity, and if you have a system that does this automatically, you can run efficient online sales – so why would you still want a human to okay that price, taking away the benefits automation gives you?
Don’t get me wrong: business intelligence certainly has its place in any organization, but it should not be the default way to go in a future-proof organization. Building automated systems is expensive, and those systems can only create maximal value for you if they lead to actions. So while business intelligence strives to better inform tasks executed by humans, AI should strive to execute the task itself. As an AI leader, you should push the whole organization to make the distance between the AI system and the action as small as possible, and this focus on the practical action should be ever-present: during strategy setting, when devising the governance of AI, during projects building AI systems, in your anticipation of needed change management, and in so many more places. So my personal baseline, which I advise every AI leader to use as often as possible, is: no action equals no value equals no sense.
Thank you for listening to this first episode, I hope you enjoyed it, or that it at least got your thinking going. Because this was just one of 101 stories to cement your AI leadership, and many questions remain: under what circumstances can we trust AI systems with fully automated actions? Where do we get good ideas to apply AI? Luckily, we still have a few episodes to go – 100, if the title is anything to go by. Next time, I’ll tackle another big topic: is AI really just one thing, or should organizations perhaps realize that there are vastly distinct types of AI?


