Can AI be explainable? I seriously doubt 100% AI explainability. The most we can expect is for AI to be as understandable as a human expert's opinion, and the latter is far from 100% explainable. Let me explain.
ML and AI, in general, are taking over more and more activities that humans have been doing satisfactorily for years or decades. Driving cars, for example. Still, there are some scary practical implications around this. When a driver causes a traffic accident, he is the one responsible. When an AI-driven car has a traffic accident, who is accountable? Of course, somebody should be held responsible, but who precisely? The team behind the neural network, and the software in general, that is driving the car?
In this article I’m talking about XAI, short for explainable AI.
What is XAI?
A few links about the topic:
Can AI be explained?
Let’s put it another way: if an AI-driven car causes a traffic accident, how did this happen? It is practical to know this, at least in order to avoid such situations in the future. Now, the real question is: can the answer be found?
AI is nothing but a bunch of numbers that represent some model implementing some algorithm. The algorithm is a mathematical construction and its logic can be explained, but that is not enough. The numbers are a compressed representation of the data that was used to train the model. Even when there are a couple of billion of them, they still don’t contain the whole of the information used to generate them; in effect, they are a lossy compression of it. Now, are they responsible for a given decision? Yes, of course. But …
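To make the lossy-compression point concrete, here is a minimal sketch of my own (a toy example in plain NumPy, not from the article): 20,000 data values get squeezed into a two-number "model" that can reproduce the trend but never the individual points.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training data": 10,000 noisy observations of an underlying linear trend.
x = rng.uniform(0, 10, size=10_000)
y = 3.0 * x + 1.0 + rng.normal(scale=2.0, size=10_000)

# The entire "model" is just two numbers: slope and intercept (least squares).
slope, intercept = np.polyfit(x, y, deg=1)

# 20,000 values compressed into 2 parameters is clearly lossy: the fitted
# line recovers the trend, but the individual points are gone for good.
residuals = y - (slope * x + intercept)
print(round(slope, 1), round(intercept, 1))  # close to the true 3.0 and 1.0
print(residuals.std() > 1.0)                 # the noise was not preserved
```

The two fitted parameters "explain" the trend perfectly, yet they cannot tell you anything about any single training point, which is exactly the situation with billions of neural-network weights, only at a vastly larger scale.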
Why does this matter …
In some cases, decisions have to be explainable, at least from a legal point of view, so that they cannot be abused, for example when there are financial consequences. Without this, decision-makers left unquestioned may abuse their positions. And this is just about humans.
Let’s say you use some very heavily trained and very sophisticated neural network that makes decisions. It is a fact that, at the time being, neither you nor anyone else can provide an explanation of why this neural network made the decision it just produced. Sounds scary, doesn’t it? But …
And why we should be aware of the limitations of AI
Let’s say you use such an AI beast to make decisions about some images. Image recognition, sentiment analysis, etc. Let’s say you ask such an AI beast whether the Mona Lisa is smiling or not. Whatever sentiment decision it makes, there is no definitive answer to check it against. This is true for an expert opinion as well.
And Finally the Point
The point is: does it matter anyway? Because even a human can’t explain this, in a situation where many experts give different answers to the same question. So is it a fair game to ask machines why they made some decision? In the old days of ML, with decision trees and linear regressions, it was (and still is) doable to explain the decisions of a trained ML model. Today, with a couple of billion numbers representing the model, it is impossible. In the same way that it is impossible for a human to explain exactly and mathematically why she thinks that Mona Lisa is smiling. Or not smiling.
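To illustrate what "explainable by construction" means for old-school models, here is a minimal sketch with a linear regression (my own made-up toy data, plain NumPy): the model is three numbers, and each one states exactly how much a named feature pushed the score up or down.

```python
import numpy as np

# Toy scoring data (a made-up example): columns are [income, debt].
X = np.array([[50.0, 10.0],
              [30.0, 25.0],
              [80.0,  5.0],
              [20.0, 30.0]])
# Scores generated by a known linear rule: 0.02*income - 0.01*debt.
y = 0.02 * X[:, 0] - 0.01 * X[:, 1]

# Fit score ~ w0 + w1*income + w2*debt with ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# The model IS its own explanation: for any applicant, the score
# decomposes into named, human-readable contributions.
applicant = np.array([1.0, 40.0, 20.0])  # bias term, income=40, debt=20
contributions = w * applicant
print(np.round(w, 3))                 # recovers [0, 0.02, -0.01]
print(round(contributions.sum(), 3))  # 0.02*40 - 0.01*20 = 0.6
```

Compare this with a billion-parameter network: there, no individual weight has a nameable meaning, so no decomposition like `contributions` above exists, which is precisely the explainability gap the article is about.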
At the Bottom
I found a book about XAI. I plan to read it, and I recommend it to anyone interested in modern AI trends:
Also, there is an online event by the author about XAI. Go ahead and register for it here:
And have a nice time watching the event. I will be there.