
Using AI experimentation for organizational learning

Building the organizational capabilities to make the most of the opportunities AI brings is becoming essential for companies. Companies at different maturity levels will need to focus on different efforts to overcome the challenges each stage holds and unlock the potential the next stage can bring. In our experience, no matter how strong the urge to accelerate the improvement process, or where an organization starts, one has to respect the general nature of how change happens within an organization.


In this post we go deeper into the first stage of our maturity model: Experimenting with AI.

Experimenting with AI

We focus on its typical characteristics, its threats and benefits, and what you can do to accelerate and facilitate progress in this stage.


Typical characteristics of the stage

An organization's first encounter with AI usually happens through POCs (proofs of concept). In a typical example, interest arises in several departments: people are exposed to news, supplier proposals or their own curiosity, and want to understand the impact and use cases of artificial intelligence for their daily tasks. At this stage people within an organization usually become aware of specific tools that can be integrated into their operations to increase efficiency: an automated email sorter, a transcriber for customer service, a fraud detection system, a lead generation tool, a social media optimization application, to name just a few examples. These are either off-the-shelf tools that come with a pretrained model, or small-scale custom solutions. Oftentimes there is suspicion around the validity of the business cases, which urges the people involved to be cautious. At this phase it is hard to differentiate hype from actual business cases, and there is often no clear understanding of what AI can do. Exaggerated expectations ("it should solve what a person could solve, after all it is artificial intelligence") and unreasonable skepticism ("this is just hype, we shouldn't be dragged along by it") live side by side, and the resulting confusion in presumptions makes communication about the topic very hard.


Individual projects try to live up to these expectations, usually without coordination. Sometimes there are already a few data scientists at this stage, and they are bombarded with more requests than they can deliver. These few people might not be familiar with, and may even feel overwhelmed by, the growing range of AI-related tools and use cases. They can feel pushed around by shallow professional standards, yet are not given enough resources such as tools, data or guidance, and often end up leaving. A lack of serious management buy-in makes this even harder, as executives first want to see results to justify investment decisions, which can turn into a vicious circle.


Threats and benefits of the stage

The most typical threat is that participants confuse POCs with delivered software and expect organization-scale results from very limited investments. Furthermore, organizations often find themselves with several uncoordinated projects challenging the central IT architecture on all fronts, whether with off-the-shelf solutions or custom developments. The huge enthusiasm of local evangelists can quickly turn into the bitter realization of how slow progress is. This is especially hard with use cases that are widely promoted in the news as successful but rely on internal data sources that are not yet managed.


The most important benefit of the experimenting stage is that it creates general awareness of the opportunities in AI, along with the sober realization that at the end of the day this is (again) not a silver bullet. Starting to understand the prerequisites of delivering business value with AI maps out the key domains where investments are needed and sets more realistic expectations of what the technology can bring. The frictions that surface in experiments start to create a shared language of basic concepts, which enables more fruitful discussions. Ideally a few projects end up being successful and serve as lighthouse projects that provide the momentum for investing more towards implementation.


What can you do to move faster in this stage?

Create a map of opportunities: channeling curiosity and excitement across the organization to map and evaluate potential AI use cases can give you a good grasp of this divergent phase. Use cases can come from in-house brainstorming or business hackathons, benchmarking, consultants, or frankly just focused googling. There are several tools out there to evaluate a case, like the machine learning canvas, but in this phase it is more important to keep things simple and diverse. AI is a general-purpose technology, meaning it can contribute to most processes, from sales and marketing through HR to the core, industry-specific business processes.


A good way to start is to fill out five columns for each case:

  • Business problem: what will it solve, who will be the beneficiary?

  • Technology: what technology does it use (time series prediction? natural language processing? ...)

  • Data: what data does the technology need to work? It is worth separating the training data it needs (maybe provided by the supplier, maybe our own data) from the operational data it needs (once we run the trained model, what input does it require?)

  • Business value: an agreed-upon scale (from 1 to 10, or the Fibonacci scale) that makes projects comparable without having to disclose full ROI calculations

  • Technical complexity: the same scale, but now reflecting how hard it is to develop and deploy the technology and gather the necessary data

Such a structure helps the brainstorming, grounds expectations and enables coordination. The strange thing about this phase is that it is usually better to go for the lowest-complexity projects first, regardless of their business value. A typical mistake is to aim for high-value, high-complexity projects without realistic capacity and buy-in in the organization, not least because complexities tend to be underestimated in this phase.
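
To make the five columns a bit more tangible, here is a minimal sketch in Python of how such an opportunity map could be captured and ordered. The structure and the low-complexity-first ordering follow the description above; the two example entries (email sorting, fraud detection) are drawn from the tools mentioned earlier and are purely illustrative, and the field names and scores are our own assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """One row of the opportunity map: the five columns described above."""
    business_problem: str      # what it solves and who benefits
    technology: str            # e.g. time series prediction, NLP
    data: str                  # training and operational data it needs
    business_value: int        # agreed-upon scale, e.g. 1 to 10
    technical_complexity: int  # same scale: effort to build, deploy, gather data

# Hypothetical example entries, for illustration only
backlog = [
    UseCase("Sort incoming customer emails by topic for the service team",
            "natural language processing (classification)",
            "historical emails with topic labels; live inbound emails",
            business_value=6, technical_complexity=3),
    UseCase("Detect fraudulent transactions before settlement",
            "anomaly detection on transaction time series",
            "labelled transaction history; real-time transaction stream",
            business_value=9, technical_complexity=8),
]

# In this phase it usually pays to start with the lowest-complexity
# cases first, regardless of business value.
for case in sorted(backlog, key=lambda c: c.technical_complexity):
    print(f"[complexity {case.technical_complexity}, value {case.business_value}] "
          f"{case.business_problem}")
```

Whether you keep such a map in a spreadsheet, a canvas or a few lines of code matters far less than keeping the columns simple and the list diverse.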


Coordinate and nurture AI evangelists: in the experimentation stage you most probably have hidden champions across the organization. An enthusiastic board member, someone in finance with strong statistical basics, someone in production who is keen on following the latest news in the field, someone in marketing with a background in data-driven tools who was simply never asked… With a few short training sessions around the organization and a framework for the new role of AI evangelist, these people will surface. They can become the backbone of your coordination and education efforts across departments. Provided with professional guidance, they can collect and coordinate ideas across teams, discover synergies, share success stories, help dissolve misunderstandings… and they are usually super excited about the new role.


The road ahead

There is a chance of getting stuck in experimentation. In this stage the organization is usually not willing to make big investments in money and people, and it is even more reluctant to transform its processes and learn the new skills that would allow AI to deliver results. If there is enough momentum, buy-in and a few low-hanging-fruit lighthouse projects, the organization can move on to the next stage: implementing AI in one or a few business processes, which will come with its own challenges.



