Abstract
This special issue brings together seven articles that address different aspects of explainable and interpretable artificial intelligence (AI). Over the years, machine learning (ML) and AI models have achieved strong performance across a wide range of tasks, sparking interest in deploying these methods in critical applications such as health care and finance. To be deployable in the field, however, ML and AI models must be trustworthy. Explainable AI and interpretable AI are two research areas that have become increasingly important for ensuring the trustworthiness, and hence the deployability, of advanced AI and ML methods. Interpretable AI refers to models that obey domain-specific constraints so that they are more readily understood by humans; in essence, they are not black-box models. Explainable AI, on the other hand, refers to methods that are typically used to explain a separate black-box model.
| Original language | English |
|---|---|
| Pages (from-to) | 1427-1428 |
| Number of pages | 2 |
| Journal | IEEE Transactions on Artificial Intelligence |
| Volume | 5 |
| Issue number | 4 |
| DOIs | |
| State | Published - 1 Apr 2024 |
| Title | Guest Editorial: New Developments in Explainable and Interpretable Artificial Intelligence |