Taking a deep breath: Implementing AI in the organization
When building AI maturity in an organization, we usually start by experimenting with AI. Organizations either get fed up with “just experimenting” or reach a level of positive awareness that this technology is worth more of their attention. When attitudes (especially managerial ones) reach the conviction that “this might actually seriously help us”, they enter the stage of AI implementation. Getting there has its own challenges, but successfully implementing one or several AI projects also surfaces new questions and opportunities. This is what we will discuss in today’s blog of the AI maturity journey.
Typical characteristics of the stage
Experimentation is usually about awareness and small, low-hanging fruit. These projects tend to have extreme ROIs, mostly because of small budgets and easy implementation, but their overall magnitude won’t affect the bottom line of the organization. They typically stay below the level of “strategic projects” and remain a side note at board meetings (unless experimentation is a consciously managed stage). Implementation of AI, on the other hand, generally targets something critical: a business priority where AI promises big efficiency gains and/or increased revenue. A few examples of implementation projects are:
Transforming customer service to allow AI to be the first point of contact (compared to the former task of quality checks via AI)
Changing sales strategy to go after the long tail of potential customers because of the opportunity of an AI-orchestrated digital journey (compared to the former task of buying lookalike audiences based on existing clients)
Showing banking customer agents what to ask next in real time for the best opportunity mapping based on the background of the client (compared to the former task of emotional analysis of calls)
Scheduling production and maintenance and digging for root-cause analysis based on predictions from the data layer of production (compared to the former task of implementing a quality-checking camera at the end of a production line)
Orchestrating stock management, pricing and refill scheduling in retail based on predicting sales of each item at a certain location (compared to doing a heatmap of where people are going based on security camera feeds)
This does not mean that the projects during the experimentation phase were not useful. But a few things are common when it comes to implementation:
There is a large phase of the project where data has to be cleaned, and as a side-effect, the organization has to face the quality and interoperability of the data. This often initiates infrastructural investments to support the AI project’s data need.
Critical processes will rely on AI. So, business processes have to be redesigned around AI with the limitations of this technology in mind. These new processes have to build on the strengths of AI models, but at the same time compensate for the general statistical nature of machine learning. The organization and managers will still be responsible for the outputs.
Employees will have to collaborate with AI. AI will not entirely replace parts of the operations but will take over tasks (which might be frightening) and require people to learn new skills to trust, use and judge the outputs of AI-enabled tools.
In-house expertise is needed. Although these projects can still be implemented with external support, there has to be a dedicated team who can represent client-side expectations and translate between business and technology. Usually, AI specialist expertise is recruited or educated to start building in-house competencies.
The scale and importance of the project require a single-minded focus. Because of the complexity and the many layers that can prolong the project, a strong, high-influence champion is necessary to achieve results. If the scope is undefined or expands into more general AI implementation efforts, there is a high risk of the project falling apart.
The projects in the Implementation of AI phase are the closest to the general challenges of implementing software in an organization. Anyone who has gone through a customer relationship management (CRM) or an enterprise resource planning (ERP) system implementation knows that the software itself is usually the easiest part. Defining the scope is difficult, and the processes must be shaped to fit the software. Changes can escalate quickly, growing into waves that wash over the entire organization. But eventually, it starts working. The special threats of AI-enabled software implementation come from four main sources:
AI is fuzzy: Unless the chosen AI is a deterministic, rule-based expert system, machine learning solutions will always have a statistical probability of failure. Defining an acceptable failure rate (no, 0% is not an option, but it never is) and/or building processes around AI tools so that they only support people is an extra effort. Compliance or internal quality standard accreditation departments may push back.
Data integration: Realizing how poor certain parts of our data are is a sobering experience. Quality, interoperability, accessibility, sensitivity and even organizational politics (who is the in-house data owner?) can cause headaches. One can easily end up in the middle of a big data migration project with new data architecture procurement.
Machine Learning Operations (MLOps) is not straightforward: The computational architecture you build or rent to support in-house development will experience huge capacity workloads when training AI and very low-intensity periods when you only use your model(s). Combine this with potentially sensitive data that cannot leave your premises, the exploratory nature of AI models (and thus several training rounds) and their continuous development, add a flavor of industrial security standards… and you end up with a very significant requirement list.
Artificial Intelligence is a loaded concept: Anyone involved in the project will have to go through a repositioning curve from sci-fi threats to real-life challenges. In many cases, it is also an emotional journey. Unexpressed misconceptions and the barriers they create might lead to resistance rooted in unrealistic sci-fi scenarios or in overhyped expectations of AI (“we just throw some magic AI dust on everything and suddenly it will work”).
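The “AI is fuzzy” threat above becomes manageable once the acceptable failure rate is written down as an explicit acceptance gate. Here is a minimal sketch (the 2% threshold and the test numbers are hypothetical assumptions, not figures from any real project) that compares a model’s observed test error against an agreed limit, using an upper confidence bound so that a lucky test run cannot pass by accident:

```python
import math

def error_rate_upper_bound(errors: int, n: int, z: float = 1.96) -> float:
    """Wilson score upper bound for the true error rate,
    given `errors` failures observed in `n` test cases."""
    if n == 0:
        raise ValueError("need at least one test case")
    p = errors / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center + margin) / denom

# Hypothetical acceptance gate agreed with the quality department:
# the model passes only if we are ~95% confident that the true error
# rate is below 2% -- not merely if the raw test error looks low.
ACCEPTABLE_ERROR = 0.02
errors, n = 9, 1000          # 0.9% observed error on the test set
upper = error_rate_upper_bound(errors, n)
print(f"observed {errors/n:.1%}, upper bound {upper:.1%}, "
      f"pass: {upper < ACCEPTABLE_ERROR}")
```

A gate like this gives compliance and quality departments a concrete number to push back on, instead of an open-ended promise that “the AI mostly works”.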
Threats and benefits
The most profound benefit of this stage is that AI-enabled tools can start to yield results. This may seem trivial, but looking back at the Experimenting stage, it is a great accomplishment. Further benefits may include the project becoming a true in-house lighthouse project and triggering more large-scale projects: a good blueprint that can be replicated. At the same time, the most significant threat to moving forward is that the project becomes isolated. During the long-term effort of fighting the ever-popping fires generated by the waves of change and pushing the agenda to get to actual results, it can become a lonely journey. Champions and organizational departments might find themselves branded as “smart but weird”. Although the original strategic idea was to enable the organization to live with the opportunities of AI, the project can’t infiltrate the other units, and strategically the organization finds itself in a new steady state, stating that “now we have implemented AI”. But neither the other existing opportunities nor the upcoming opportunities that racing technological development brings will be exploited.
What can you do to progress better?
The Implementation phase requires a ton of great project management and change management techniques, hiring/internal education and specialized procurement; these are relatively straightforward. We picked two components where what to do, or how to do it, might not be trivial:
Exploratory data capital evaluation project: In this stage the state of the data that is going to be affected is critical. The biggest surprises might come from this domain while everyone is excited about the soon-to-be shiny technology. It is wise to start a discovery project that follows through the processes that are going to be, or might be, affected and looks into how useful the data is. For example, is the data stored in one system or in several connected systems? Are there clear connector entities (e.g., everyone uses the same ID for a client)? How much of the data is manually added? Is it well managed? Is it stored at all? (Sometimes data supporting an ongoing process is not saved once it has done its job.) Do WE have access? (Sometimes data is collected by production machines but stays with the machines’ manufacturer…)
Collecting this information by searching, interviewing and asking for data can be very eye-opening, resulting in a data map of the targeted domains. In most cases this is surprising in both directions: “we were sure we had reliable data” and “we didn’t know that we already collected this data”. This method can be a first step toward a broader data capital evaluation and raising awareness.
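The discovery questions above can partly be automated. As a sketch, suppose we have exports from two systems that the AI project is meant to bridge (the systems, column names and records below are entirely hypothetical); two quick checks already answer “how much is missing?” and “do the connector IDs actually connect?”:

```python
import pandas as pd

# Hypothetical exports from two systems the AI project must bridge:
# a CRM and an invoicing system, both supposedly keyed by client ID.
crm = pd.DataFrame({
    "client_id": ["C001", "C002", "C003", "c004", None],
    "segment":   ["SMB", "ENT", "SMB", "ENT", "SMB"],
})
invoices = pd.DataFrame({
    "client_id": ["C001", "C003", "C004", "C005"],
    "amount":    [1200, 560, 990, 310],
})

# 1. Missing values: how much of the key column is simply absent?
missing = crm["client_id"].isna().mean()

# 2. Join rate: do the "same" IDs actually connect the systems?
joined = crm.merge(invoices, on="client_id", how="inner")
join_rate = len(joined) / len(invoices)

print(f"missing CRM keys: {missing:.0%}")           # manual entry?
print(f"invoices matched to CRM: {join_rate:.0%}")  # "c004" != "C004"
```

Even on this toy data, half of the invoices fail to join because of a missing key and a lowercase ID, exactly the kind of finding a data map should surface before modeling starts.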
Slice to sell: Rather than aiming for one big result, it is important to have useful milestones along the way. When old-fashioned engineers start to design the project, they will begin with the data architecture, then data cleaning, modeling, testing and finally implementation. But there might be another way of slicing that could generate results sooner. It might only be an analytics board. Or, by reducing the scope, a simple recommendation engine might be sufficient instead of implementing full automation. These smaller milestones are very important opportunities to get the necessary support to continue AND, at the same time, allow future users to get accustomed to the new opportunities. Agile methodology is going to be your friend in managing the process, but finding the right slices, each with a real user, that add up to the overall giant result you must build is an art at the beginning. This is somewhat like the opportunity map we described in the Experimentation phase, but now the small user stories have to add up to the desired concept.
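To make the “simple recommendation engine instead of full automation” slice concrete, here is a minimal sketch of a first deliverable (the product names and order data are invented for illustration): a plain co-occurrence recommender built from existing order history, shipped long before any automated stock orchestration exists.

```python
from collections import Counter
from itertools import combinations

# Hypothetical first slice: before any automated stock orchestration,
# ship a plain "bought together" recommender over existing orders.
orders = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"bread", "milk"},
    {"milk", "cereal"},
]

# Count how often each pair of items appears in the same order.
co_counts: Counter = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1

def recommend(item: str, k: int = 3) -> list[str]:
    """Items most often bought together with `item`."""
    scores: Counter = Counter()
    for (a, b), n in co_counts.items():
        if a == item:
            scores[b] += n
        elif b == item:
            scores[a] += n
    return [other for other, _ in scores.most_common(k)]

print(recommend("bread"))
```

A slice like this has a real user (a merchandiser) from day one, and the data plumbing it forces you to build is exactly what the later, larger predictive system will need.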
The road ahead
When managers in organizations that have implemented AI complain, they complain about how isolated the project is. They may even have an AI center that comes up with brilliant ideas, but those ideas can’t find their way to the business. The next stage of the journey is to enable the organization to use AI-powered tools across all domains of the business. This is a very different goal than one big technical implementation push. It is the duality of building enabling technologies for everyone and of creating the attitude and skillset to use them. Our next stage is going to be the Data-driven organization.