Nick Jetten | 6 January 2022

Auditing an AI-First company: 6 topics to include in AI Due Diligence

“We are an AI-First company”, “Our self-learning technology ensures…”, “Our unique AI-driven software…”. Claims like these appear on the websites of many startups and scale-ups. After performing 30+ audits in the Dutch and European start-up and scale-up landscape, we have learned that these claims can definitely be true: a growing number of scale-ups develops promising in-house AI that directly contributes to the value of their company. But we also see the other end of the spectrum: false promises about technology that does not exist (yet). From an investor or venture capital perspective, this raises a number of important questions when considering acquiring or investing in such a company:

  • Is the AI claim true; did the company actually develop this AI solution in-house (or with partners)?
  • Does the AI already create business value of some kind?
  • Will this AI also scale with the scaling journey of the company?
  • Does the AI provide the company a competitive edge?

A growing number of investors face these questions when participating in seed rounds of scale-ups. The only way to answer them and feel confident about the investment is to audit the developed AI/ML and the responsible team, and to check whether the AI roadmap is actually aligned with the high-level strategy of the company. This article describes the 6 topics that should be included in every AI due diligence: a tech due diligence with a strong focus on AI topics.

1. Roadmap: ML models are, in the end, a simplified version of the world they operate in. If that context changes (a different country, a different customer segment, different or new regulation), model performance degrades or, worse, models stop making useful predictions at all. It is therefore advisable to check whether the AI roadmap is in sync with the business roadmap. If expansion to Germany is on next year’s agenda of a medtech scale-up, but there are no data samples from the German market yet and the regulation differs completely from the current market, the question is whether the AI scales at all and how much time (and hence investment) is needed to make it scale.

2. AI Algorithms: the majority of ML algorithms used at scale-ups come from open-source (often Python-based) packages that are available to everyone, not just to this scale-up. The competitive advantage that companies can create usually lies in the total algorithm pipeline: the uniqueness of the dataset used, the time invested in data engineering and preparation, and the design of a complete chain of ML models rather than a single model. All of this together should be assessed to determine the competitive advantage the company has truly created.
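To make this concrete, the sketch below (illustrative only; the dataset and pipeline steps are hypothetical, using scikit-learn) shows how the estimator itself is a commodity while the value sits in the surrounding pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for a company's proprietary dataset
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# The model step (LogisticRegression) is freely available to any competitor;
# the defensible part is the composition: proprietary data, the preparation
# steps, and domain-informed choices such as which features to keep.
pipeline = Pipeline([
    ("scale", StandardScaler()),               # data preparation
    ("select", SelectKBest(f_classif, k=8)),   # domain-informed feature selection
    ("model", LogisticRegression()),           # off-the-shelf open-source algorithm
])
pipeline.fit(X, y)
print(round(pipeline.score(X, y), 2))
```

In an audit, the question is not “which algorithm is in the last step?” but how unique and well-engineered the rest of this chain is.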

3. Infrastructure: realising successful ML in production is easier when the wider tech environment is modern. When the chances of technical debt are minimised, ML development benefits as well: there is a decent, well-thought-out data model, the infrastructure is well documented, and it follows cloud and DevOps best practices. A microservices setup also makes it easier to integrate ML services into the tech stack and iterate on them.

4. Data quality: especially when AI plans are still in an early phase, it is important to assess whether the data quality is sufficient to start building models. There is no single strict rule for a minimum sample size, so depending on the use case, assess how much data is available and what value it brings. Also assess whether there were any structural breaks in the past: scale-ups often went through technical changes in their early years, which can cause problems in mapping data from old customers or transactions.
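One simple way to surface candidate structural breaks (a sketch, assuming event records with a timestamp column; the column name and threshold are hypothetical) is to flag months where record volume shifts sharply, since such jumps often coincide with migrations or schema changes:

```python
import pandas as pd

def flag_structural_breaks(df, date_col="created_at", threshold=0.5):
    """Return months where record volume jumps or drops sharply.

    Sudden volume shifts often coincide with platform migrations or schema
    changes, which complicate training models on the full history.
    """
    months = pd.to_datetime(df[date_col]).dt.to_period("M")
    monthly_counts = df.groupby(months).size().sort_index()
    rel_change = monthly_counts.pct_change(fill_method=None).abs()
    return [str(p) for p in rel_change[rel_change > threshold].index]

# Toy event log: volume roughly stable, then a migration-sized jump in March
events = pd.DataFrame({
    "created_at": ["2020-01-15"] * 100 + ["2020-02-15"] * 110
                  + ["2020-03-15"] * 400
})
print(flag_structural_breaks(events))  # → ['2020-03']
```

Flagged months are starting points for questions to the team, not verdicts: a spike may simply be a successful marketing campaign rather than a data-model change.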

5. AI Team: in a modern scale-up, an in-house AI team should not be just a handful of data scientists. The common pitfall there is that models are created locally and never pass the proof-of-concept phase. Or the data science team hands over their local experiments to software engineering teams and wishes them “good luck” with productionizing the code. The AI team should consist of data engineers, ML engineers and data scientists who together know how to design, develop, implement and optimize ML models. Only this will ensure future business value from ML applications.

6. AI Operations: especially when running multiple models in production, a new challenge arises: that of AI operations. During an AI audit, it is important to check both the process side and the tech side of productionizing AI models. Do the teams use one shared architecture for different use cases? How do they monitor and optimize models? How do they ensure that models meet a growing number of requirements in regulated markets like finance or healthcare? When there are no answers to these questions, scaling the number of models will definitely become a red flag in the future.

And what about red flags?

Red flags should be thought of as risks that might seriously harm long-term business value unless solved within a short period of time. Red flags do not always mean investing is a bad idea: they are useful input for the post-investment development roadmap, to mitigate the identified risks and thereby secure the long-term business value of AI.


Check out our ML audit at https://enjins.com/audit/
