
Explainable AI explained

August 29, 2024 - 2-minute reading time
Article by Gerco Koks and Razo van Berkel

In the fast-paced world of artificial intelligence (AI), technological breakthroughs follow one another in rapid succession. Even for AI experts, it is difficult to keep up with all the developments. A few themes, however, are not just trendy but fundamentally important for the future of this technology. One of those themes is Explainable AI. You can read more about it in this article.

What is Explainable AI?

A short and simple explanation: Explainable AI refers to the processes, techniques, and functionalities that make the results of AI interpretable in a way people can understand. When using AI, the user is given a textual explanation of how the AI system arrived at an answer. This explanation is understandable even to the average user with no background in AI.

Why is that important?

In general, we can say that AI increasingly influences human actions and the decisions people make. Without Explainable AI, AI systems can determine the amount of your benefit, or make medical diagnoses, without making clear how the system arrived at that result. Cases such as the Dutch childcare benefits scandal have shown how important this is. It is the people at the controls who are responsible for the choices that are made. It is therefore useful for users of AI or an algorithm to understand how the system arrives at a certain answer. This has everything to do with transparency, trust, responsibility, and compliance.

‘It is the people at the controls who are responsible for the choices that are made’

What are the forms of Explainable AI?

Explainable AI is an umbrella term for a range of techniques and methods that make AI more insightful, and there are several possible approaches. With a technical approach, for example, hard-to-understand AI models can be made understandable after the fact: a second AI model can be trained on the behavior of the first, and can then learn to explain that behavior to an end user (a sketch of this approach follows below). The results of AI can also be made insightful in other ways. It is often possible to make the algorithms of an AI system transparent in advance. A good example is the Dutch government's Algorithm Register, which safeguards the transparency of AI and avoids the much-discussed black box.
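
As a minimal sketch of that second-model idea, assuming scikit-learn and an illustrative dataset (neither is named in the article): a complex model is trained first, and a shallow decision tree, the surrogate, is then fitted to the complex model's predictions so that its rules can be shown to an end user.

```python
# Minimal sketch of a global surrogate model, assuming scikit-learn.
# The dataset and model choices are illustrative, not from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# 1. The hard-to-interpret "first" model.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# 2. A second, interpretable model trained on the first model's behavior:
#    it learns to mimic the black box's predictions, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. The surrogate's rules are human-readable and can be shown to an end user.
print(export_text(surrogate, feature_names=list(X.columns)))

# How faithful is the explanation? Check how often the surrogate agrees
# with the black box it is supposed to explain.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.1%}")
```

The fidelity check matters here: a surrogate explanation is only trustworthy to the extent that it actually mimics the original model.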

Algorithm Register: traditional vs ML algorithms

An algorithm can be defined simply: an input, a defined calculation, and an output. With traditional algorithms, the calculation is predefined by the developer as a list of rules or other operations. The government's Algorithm Register also covers Machine Learning (ML) algorithms. This variant, often referred to as a model, is structured differently: there is still an input and an output, but the calculations are not written out as rules by the developer. Instead, they are learned (read: trained) from large amounts of data. The algorithm is, in effect, drawn up by the computer itself. The contrast is sketched below.
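
To make the contrast concrete, here is a toy example in Python (the eligibility rule, the data, and the use of scikit-learn are all illustrative assumptions, nothing here comes from the register itself): the same input-calculation-output shape, with the calculation hand-written in one case and learned in the other.

```python
# Illustrative contrast between a traditional and an ML algorithm.
# Rules and data are hypothetical toy examples.
from sklearn.linear_model import LogisticRegression

# Traditional algorithm: the developer writes the calculation as explicit rules.
def eligible_traditional(income_k: float, household_size: int) -> bool:
    # Predefined, inspectable rule: income under €30,000 and 2+ people.
    return income_k < 30 and household_size >= 2

# ML algorithm: the calculation (the model's weights) is learned from data.
X = [[25, 3], [45, 1], [28, 2], [60, 4]]  # toy inputs: [income in €1000s, household size]
y = [1, 0, 1, 0]                          # toy labels: 1 = eligible
model = LogisticRegression().fit(X, y)

print(eligible_traditional(25, 3))    # output of the hand-written rules
print(model.predict([[25, 3]]))       # output of the learned calculation
print(model.coef_, model.intercept_)  # the "calculation" drawn up by the computer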

How do you make an ML algorithm transparent?

This is what makes transparency harder: an ML algorithm is encoded in many separate parameters, often millions or billions (billions as in well-known open-source AI models, for example Meta's Llama 3 70B). At that scale, the parameters are not interpretable by humans, whereas the rules of a traditional algorithm are. This naturally raises the question of how an opaque algorithm can still be included in an algorithm register. One way is to share technical details, both about the training process and about the data sources used for training; this could make the ML algorithm reproducible, for example. Insight can also be given into the verification steps taken to check that the algorithm functions correctly.
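
What such shared detail could look like in practice, as a hedged sketch (the field names, dataset, and model are illustrative assumptions; real registers define their own schemas): record enough of the training setup that the model can be retrained, plus a verification result measured on data the model never saw.

```python
# Hedged sketch of register-style documentation: training details for
# reproducibility plus a verification step on held-out data.
# Field names, dataset, and model are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)

register_entry = {
    "model_type": "RandomForestClassifier",
    "training_data": "scikit-learn breast cancer dataset (illustrative)",
    "random_seed": 42,                       # enables replication of training
    "hyperparameters": model.get_params(),
    # Verification: performance measured on data the model never saw.
    "held_out_accuracy": accuracy_score(y_test, model.predict(X_test)),
}
print(register_entry["held_out_accuracy"])
```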

In short, Explainable AI plays an important role in providing trust and transparency, and lays an important foundation for the responsible use of AI systems by end users. At a time when far-reaching personal decisions are increasingly made with an AI system, it is essential for those affected to understand how those decisions were made. In addition, Explainable AI contributes to regulatory compliance and the application of the right ethical standards: essential for responsible and sustainable use of AI in our society.

Related articles
How do you detect a deepfake?
In this article, you will discover how to detect deepfakes.
Why Centric is experimenting with its own AI model
Centric is building its own AI language model that considers issues like privacy and information security ...
Generative AI: The Maker Archetype and EU Regulation
Read about the impact of the AI Act on the Maker Archetype here.