What are the forms of Explainable AI?
Explainable AI is an umbrella term for the techniques and methods that make the workings of AI systems insightful. There are several possible approaches. With a technical approach, for example, AI models that are difficult to understand can be explained after the fact: a second AI model can be trained on the behavior of the first and learn to explain that behavior to an end user. There are also other ways to make the results of AI insightful. The algorithms of an AI system can often be made transparent in advance. A good example is the Algorithm Register for the government. This register safeguards the transparency of AI and helps avoid the often-mentioned black box.
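The post-hoc technique mentioned above, in which a second model learns to explain the first, can be sketched in miniature. The black-box model, the sample inputs, and the threshold rule below are all illustrative stand-ins; in practice this idea appears as a "global surrogate", where an interpretable model is fitted to mimic an opaque one.

```python
# Minimal sketch of a post-hoc "global surrogate" explanation.
# The black box and the data are invented for the example.

def black_box(x: float) -> int:
    # Opaque model: we treat its internals as unknown.
    return 1 if (x * 0.8 + 0.3) > 2.0 else 0

# Probe the black box with sample inputs.
samples = [i / 10 for i in range(0, 50)]
labels = [black_box(x) for x in samples]

# Fit the simplest possible surrogate: a single threshold rule
# "predict 1 if x > t", chosen to agree best with the black box.
def fit_threshold(xs, ys):
    best_t, best_acc = None, -1.0
    for t in xs:
        acc = sum((x > t) == bool(y) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

t, fidelity = fit_threshold(samples, labels)
print(f"Surrogate rule: predict 1 if x > {t:.1f} (fidelity {fidelity:.0%})")
```

The surrogate's single rule is something a human can read and check, while the "fidelity" score indicates how faithfully it mimics the black box on the probed inputs.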
Algorithm Register: traditional vs ML algorithms
An algorithm can be defined as follows: there is an input, a defined calculation, and an output; together these form the algorithm. With traditional algorithms, the calculation is predefined as a list of rules or other operations. The Algorithm Register for the government also covers Machine Learning (ML) algorithms. This variant, often referred to as a model, is structured differently. There is still an input and an output, but the calculation is not written out in rules by the developer. Instead, it is learned (read: trained) from large amounts of data. Here, the algorithm is, as it were, drawn up by the computer itself.
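The contrast above can be made concrete with a small sketch. The shipping-cost scenario, the rule thresholds, and the example data are all invented for illustration: the first function's calculation is written out by the developer, while the second function's calculation (two parameters) is learned from example data via ordinary least squares.

```python
# Illustrative contrast between a traditional and an ML-style algorithm.
# All names and numbers are invented for the example.

# 1) Traditional algorithm: the calculation is predefined as rules.
def shipping_cost_rules(weight_kg: float) -> float:
    if weight_kg <= 2.0:
        return 4.00
    if weight_kg <= 10.0:
        return 6.50
    return 11.00

# 2) ML-style algorithm: the calculation (parameters a and b of a line)
#    is learned from example data instead of being written by hand.
examples = [(1.0, 4.6), (3.0, 6.2), (5.0, 7.8), (8.0, 10.2)]  # (weight, cost)

def fit_line(points):
    # Ordinary least squares for a single input variable.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    a = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    return a, my - a * mx  # slope and intercept

a, b = fit_line(examples)

def shipping_cost_learned(weight_kg: float) -> float:
    return a * weight_kg + b
```

In the first case the rules themselves can be published and inspected; in the second, only the learned parameters exist, and with real models there are millions of them rather than two.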
How do you make an ML algorithm transparent?
This makes transparency more difficult: the ML algorithm is embedded in many separate parameters, often millions or even billions (footnote: billions as in well-known open-source AI models from Meta, for example Meta Llama 3 70B). At that scale, the parameters are not interpretable for humans, whereas the rules of a traditional algorithm are. This of course raises the question of how an opaque algorithm can still be included in an algorithm register. One way is to share technical details about both the training process and the data sources used for training; this could, for example, make the ML algorithm replicable. Insight can also be given into the verification steps taken to check that the algorithm functions correctly.
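The idea of registering an opaque model via its training and verification details can be sketched as structured metadata. The field names and values below are purely hypothetical and do not reflect the actual schema of the Algorithm Register; the point is that the published record describes the process around the model rather than its uninterpretable parameters.

```python
# Hypothetical register entry for an ML algorithm: instead of publishing
# millions of parameters, structured metadata about training and
# verification is published. All field names and values are illustrative,
# not the real schema of the Algorithm Register.
register_entry = {
    "name": "Example risk-scoring model",
    "type": "machine learning",
    "training": {
        "data_sources": ["anonymised case records 2019-2023"],
        "procedure": "gradient-boosted trees, 5-fold cross-validation",
    },
    "verification": {
        "accuracy_on_holdout": 0.91,
        "bias_audit": "performance compared across demographic groups",
    },
}

# Such an entry supports replication (training details) and scrutiny
# (verification steps) even though the model itself stays opaque.
print(register_entry["name"])
```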
In short, Explainable AI plays an important role in providing trust and transparency, and lays an important foundation for the responsible use of AI systems by end users. At a time when far-reaching personal decisions are increasingly made with AI systems, it is essential for those involved to understand how those decisions were made. In addition, Explainable AI contributes to regulatory compliance and the application of the correct ethical standards, which is essential for responsible and sustainable use of AI in our society.