A method for automating token causal explanation and discovery

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

Explaining why events occur is key to making decisions, assigning blame, and enacting policies. Despite the need, few methods can compute explanations in an automated way. Existing solutions start with a type-level model (e.g., factors affecting risk of disease) and use it to explain token-level events (e.g., the cause of an individual's illness). This is limiting, since an individual's illness may be due to a previously unknown drug interaction. We propose a hybrid method for token explanation that uses known type-level models while also discovering potentially novel explanations. On simulated data with ground truth, the approach finds accurate explanations when observations match what is known, and correctly finds novel relationships when they do not. On real-world data, our approach finds explanations consistent with intuition.

Original language: English
Title of host publication: FLAIRS 2017 - Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference
Editors: Vasile Rus, Zdravko Markov
Pages: 176-181
Number of pages: 6
ISBN (Electronic): 9781577357872
State: Published - 2017
Event: 30th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2017 - Marco Island, United States
Duration: 22 May 2017 - 24 May 2017

Publication series

Name: FLAIRS 2017 - Proceedings of the 30th International Florida Artificial Intelligence Research Society Conference

Conference

Conference: 30th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2017
Country/Territory: United States
City: Marco Island
Period: 22/05/17 - 24/05/17
