Sophia Zitman | 6 January 2022

Explainable AI (XAI) in real life

If you do anything AI-related, you’ve most likely heard about it: Explainable AI (XAI). It often comes paired with words like ‘black box’, ‘transparency’, ‘fairness’, and ‘very important’. Google searches on XAI lead to methods like SHAP and LIME, and to yet more posts on its importance. What I find most interesting about those search results is the lack of hits on XAI in practice. Given the hype around XAI and its huge potential in the real world, it feels paradoxical that it is so rarely put to use. With this blog I want to start filling that gap on XAI IRL by sharing a project I worked on, in which I developed an explainer for a client (yes, it is live! yes, it is being used!). If you want to know how that happened, keep on reading!

A little context: the project

Before we take a deep dive into the development of the explainer, I want to introduce the client and the project so far. Knowing the context will make the next sections easier to understand. For privacy purposes, the names of the company and its employees are fictional; the description of the use case and our way of working, however, is real.

Our client is Magazine Solutions, a company that buys all sorts of magazines from publishers and sells them to stores in The Netherlands. As a middleman, they provide all sorts of services: from adding wrappers to advising stores on which magazines to sell. In the purchasing department, a team of 3 employees goes through 300–400 magazines each week and estimates how many copies of each will be sold to stores. This is not a sustainable way of working: it is very labour-intensive, and it is hard to find people who want to take over the work once the current employees retire.

ML model

We built a model that predicts the sales of each magazine for the purchasing department. It is a classical ML model that makes use of all sorts of metadata, from the topics within an issue to the publisher’s sales. It runs every night and makes predictions as new data comes in. This model will not replace the 3 employees overnight. Instead, the predictions will be used to support them: they will see the prediction and can overrule it if they disagree. This is where the explainer comes in! Understanding the model is very important when interacting with its output so frequently, so alongside each prediction, the employees will also see a local explanation. These explanations will improve the employees’ understanding of the model, and therefore their trust in it, and enable a useful feedback loop.
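To make that concrete, here is a minimal sketch of what serving a prediction together with a local explanation could look like in such a nightly run. The post does not name the model or the XAI technique that were actually used, so this assumes a tree-based regressor with SHAP; the data, the feature names, and the `predict_with_explanation` helper are purely hypothetical stand-ins.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical stand-ins for the client's data and model: one row per magazine
# issue, a handful of metadata features, and a tree-based regressor.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(50, 3)),
                 columns=["publisher_sales", "n_pages", "n_stores"],
                 index=[f"magazine_{i}" for i in range(50)])
y = rng.poisson(lam=200, size=50)
model = GradientBoostingRegressor().fit(X, y)

explainer = shap.TreeExplainer(model)

def predict_with_explanation(X_new: pd.DataFrame, top_n: int = 2) -> pd.DataFrame:
    """One row per magazine: the predicted sales plus its strongest local drivers."""
    preds = model.predict(X_new)
    contribs = pd.DataFrame(explainer.shap_values(X_new),
                            columns=X_new.columns, index=X_new.index)
    rows = []
    for i, (idx, row) in enumerate(contribs.iterrows()):
        top = row.abs().nlargest(top_n).index  # most influential features for this issue
        rows.append({"magazine": idx,
                     "prediction": round(float(preds[i]), 1),
                     "top_drivers": {f: round(float(row[f]), 3) for f in top}})
    return pd.DataFrame(rows)

print(predict_with_explanation(X.head()))
```

The exact shape of the output is decided later, together with the users; the point here is only that each prediction ships with its own local explanation.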

Interested in seeing more projects and use cases? Check out our cases page!

Making the explainer

Understanding the use case around the explainer is important. Just like an ML model, it serves a specific purpose and should be built to fit it. The explainer is the bridge between the technicalities of the model and the humans using it; you need to understand both worlds in order to make that bridge sturdy. In the next sections, I want to talk about how I tackled the human side and the technical side of this project.

Designing the explainer

Explanations are very personal (I highly recommend this YouTube series, where this is illustrated perfectly). How I explain the model’s predictions to a colleague who is not on this project differs from how I explain them to a colleague who is. When I imagined explaining it to the employees of the purchasing department, I concluded that I knew too little about them to design an explanation that would be useful. I had never had to give local explanations to a trio of non-technical 50+-year-olds. As this was crucial information (giving a bad explanation can be worse than giving none at all), I planned a 1.5-hour meeting with them.

The goal of this meeting was to design the appearance of the explanation together. Simply asking, “hey, what would you like the explainer to look like?” was not going to work. Instead, I structured the meeting in such a way that I could gather all the bits and pieces of information I wanted.

The case

I started by explaining the purpose of this meeting and that it should be lighthearted, fun, and open: every thought and comment was welcome. To get out of the magazine-focused mindset, we started with a completely unrelated case. I gave one of the employees a sheet of paper with certain factors that explained the temperature in the room and asked her to explain to her colleagues, in only one sentence, why it was 20 °C. I could sense that this was a bit of a weird request for them, but once she started explaining and the others went along, the atmosphere became positive and fruitful for the cases that followed.

All the cases that followed were interactive and put me in a position to observe. They looked more and more like our real case, and along the way I figured out what type of information feels natural to the employees and how it should be presented. Nearing the end of the workshop, I summarised my observations. They agreed with my findings, and together we made a sketch of the explainer they wanted to see alongside the output.

Writing it in code

Coding the explainer was the most straightforward part of this project. Once you know exactly what it should look like, it is only a matter of picking the proper XAI technique and adapting its output so it fits the design. Of course, there were technicalities to think through. For instance, the explainer treats all the dummified features individually, while that is not how they should be presented in the final output; and the model was trained on a log scale, which makes the raw output of the underlying explainer a bit less intuitive to interpret (see the sketch below). But those were all matters that could be resolved.
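To illustrate those two technicalities, here is a minimal, self-contained sketch of how they could be handled, assuming a SHAP-style additive explainer. The column names, the dummy-to-feature mapping, and the numbers are hypothetical; the post does not specify which XAI technique was actually used.

```python
import numpy as np
import pandas as pd

# `contrib` stands in for the raw per-column explainer output (e.g. SHAP values)
# of one nightly batch, on the log scale the model was trained on.
contrib = pd.DataFrame(
    [[0.12, -0.05, 0.30, 0.00],
     [-0.08, 0.02, 0.00, 0.21]],
    columns=["publisher_sales", "n_pages", "topic_sports", "topic_fashion"],
    index=["magazine_0", "magazine_1"],
)

# 1) Regroup dummified columns so each original feature gets a single contribution.
#    Columns that are not dummies simply map to themselves.
feature_of_column = {
    "publisher_sales": "publisher_sales",
    "n_pages": "n_pages",
    "topic_sports": "topic",
    "topic_fashion": "topic",
}
grouped = contrib.T.groupby(feature_of_column).sum().T

# 2) Translate log-scale contributions into multiplicative effects on the sales
#    estimate: exp(0.30) ≈ 1.35 reads as "the topic pushes the estimate up by
#    roughly 35%", which is easier to phrase for a non-technical user.
multiplicative_effect = np.exp(grouped)
print(multiplicative_effect.round(2))
```

The exponentiation works because, for a model trained on log sales, additive contributions become multiplicative factors on the final estimate.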

Why this worked

Once the explainer was live, it functioned the way it should. It fulfilled the users’ need for understanding and served as a solid base for giving useful feedback on the predictions. I compared this project to others I had heard about and asked myself: “Why did this one work out, and others not so much?” I believe this project was a success because we took a holistic approach. We saw the explainer as much more than just a technical feature: we understood that it served a crucial role in the new way of working, and designed it that way. This was a human-centered project first, and an innovative AI project second.

Want to do this yourself!?

Has this sparked your interest or inspired you to do something similar? Check out our workshop on designing explainers for the real world. We can help you get hands-on experience with user-centered thinking when it comes to AI. All of our workshops are based on actual XAI cases within specific fields of work, which makes them highly relevant and relatable. (This event took place on 20 January 2022.)

Check out our product company that focuses on making AI explainable: Deeploy.

 
