Explainability

Algorithmic transparency

Definition
The ability to explain to the user how a result was reached, or why a result could not be reached

How does it work?

Any rule-based system has an explicit representation of how each result was reached from a combination of sub-results. By storing these intermediate results and presenting them to the user later, a form of explainability is achieved.

An explanation can be formed by presenting the fired rules in a human-readable form, or by logging a plain-text, human-readable account of the reasoning as the algorithm executes: a "reasoning log".
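
To make this concrete, here is a minimal sketch in Python; the RuleEngine class, rule names, and facts are invented for illustration and not taken from any particular library. It shows a forward-chaining rule engine that appends every rule it fires to a reasoning log and can later replay that log as a human-readable explanation.

    class RuleEngine:
        """Forward-chaining rule engine that logs every rule it fires."""

        def __init__(self, rules):
            self.rules = rules   # each rule: (name, set of premises, conclusion)
            self.fired = []      # the "reasoning log"

        def run(self, facts):
            facts = set(facts)
            changed = True
            while changed:       # keep applying rules until nothing new is derived
                changed = False
                for name, premises, conclusion in self.rules:
                    if premises <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        self.fired.append((name, premises, conclusion))
                        changed = True
            return facts

        def explain(self):
            # Present the fired rules, in firing order, in a human-readable form.
            if not self.fired:
                return "No rules fired, so no conclusion could be reached."
            return "\n".join(
                f"{name}: {' and '.join(sorted(premises))} -> {conclusion}"
                for name, premises, conclusion in self.fired
            )

    # Hypothetical rules and facts, purely for illustration:
    rules = [
        ("R1", {"has fever", "has cough"}, "possible flu"),
        ("R2", {"possible flu", "short of breath"}, "refer to doctor"),
    ]
    engine = RuleEngine(rules)
    engine.run({"has fever", "has cough", "short of breath"})
    print(engine.explain())
    # R1: has cough and has fever -> possible flu
    # R2: possible flu and short of breath -> refer to doctor

Because the log records intermediate conclusions in firing order, the same mechanism also covers the "could not be reached" case: when no rule chain leads to the expected conclusion, the log shows how far the reasoning got.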

Examples

  • An expert system can tell you how it reached its conclusion
  • A rule-based algorithm can store its intermediate results and present them to the user later in a user-friendly manner

When should you use it?

  • When the results of the algorithm have legal consequences and legislation demands explainability
  • When the reason a result was reached is just as important as the result itself
  • When you want to show the domain expert you work with that the results are formed in the way they intended

Problems

  • Neural networks have no explicit representation of their reasoning, so in principle they cannot explain how their results are formed. Attempts to achieve explainability in AI (XAI) are being made, however; one common approach is sketched below.
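
One such attempt can be sketched as perturbation-based feature attribution: approximate an explanation for a black-box model by measuring how much its output changes when each input feature is replaced with a neutral value. This idea is related in spirit to tools such as LIME and SHAP; the model, weights, and data below are made up for illustration.

    import numpy as np

    def perturbation_importance(predict, x, baseline=0.0):
        # Score each feature by how much the prediction changes when that
        # feature is replaced by a neutral baseline value.
        base = predict(x)
        scores = []
        for i in range(len(x)):
            perturbed = x.copy()
            perturbed[i] = baseline
            scores.append(abs(base - predict(perturbed)))
        return scores

    weights = np.array([0.8, -0.1, 0.3])

    def black_box(x):
        # Stand-in for an opaque model, e.g. a trained neural network.
        return float(np.tanh(weights @ x))

    x = np.array([1.0, 2.0, -1.0])
    for i, score in enumerate(perturbation_importance(black_box, x)):
        print(f"feature {i}: importance {score:.3f}")

Unlike the rule-based explanation above, this only approximates which inputs mattered; it does not reconstruct the actual chain of reasoning.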
