Guest Editorial: New Developments in Explainable and Interpretable Artificial Intelligence

K. P. Suba Subbalakshmi, Wojciech Samek, Xia Ben Hu

Research output: Contribution to journal · Review article · peer-review

Abstract

This special issue brings together seven articles that address different aspects of explainable and interpretable artificial intelligence (AI). Over the years, machine learning (ML) and AI models have achieved strong performance across a range of tasks, which has sparked interest in deploying these methods in critical applications such as health and finance. However, to be deployable in the field, ML and AI models must be trustworthy. Explainable and interpretable AI are two areas of research that have become increasingly important for ensuring the trustworthiness, and hence the deployability, of advanced AI and ML methods. Interpretable AI refers to models that obey domain-specific constraints so that they are more easily understood by humans; in essence, they are not black-box models. Explainable AI, on the other hand, refers to models and methods that are typically used to explain another, black-box, model.

Original language: English
Pages (from-to): 1427-1428
Number of pages: 2
Journal: IEEE Transactions on Artificial Intelligence
Volume: 5
Issue number: 4
DOIs
State: Published - 1 Apr 2024
