Interpretability

Interpretability in AI is the degree to which humans can understand how an AI system reaches a decision, which matters most when those decisions are complex or high-stakes. Insight into how the system works, when it reaches a conclusion, and which inputs and criteria contributed builds confidence in the system and, in turn, trust. Interpretability also supports accountability, fairness, and the ethical use of AI applications. Several methods can make AI systems more interpretable; among the most common are visualizing model behavior, using inherently interpretable or surrogate models, and evaluating the system against real-world scenarios.
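
To make the idea of an interpretability technique concrete, the sketch below uses permutation feature importance, one widely used way to see which inputs a trained model relies on. The dataset, model, and use of scikit-learn are illustrative assumptions rather than part of the definition above; a minimal sketch, not a definitive recipe.

```python
# Minimal sketch: permutation feature importance with scikit-learn.
# The dataset and model below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a small tabular dataset and train an otherwise opaque model.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance measures how much the model's score drops when a
# single feature's values are shuffled, breaking its link to the target.
# Larger drops indicate features the model depends on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features.
ranked = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda item: item[1],
    reverse=True,
)
for name, mean, std in ranked[:5]:
    print(f"{name}: {mean:.4f} +/- {std:.4f}")
```

Ranking features this way gives a human-readable summary of what drives the model's predictions, which is the kind of insight the paragraph above describes.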
