TY - JOUR
T1 - TVE
T2 - 41st International Conference on Machine Learning, ICML 2024
AU - Wang, Guanchu
AU - Chuang, Yu-Neng
AU - Yang, Fan
AU - Du, Mengnan
AU - Chang, Chia-Yuan
AU - Zhong, Shaochen
AU - Liu, Zirui
AU - Xu, Zhaozhuo
AU - Zhou, Kaixiong
AU - Cai, Xuanting
AU - Hu, Xia
N1 - Publisher Copyright:
Copyright 2024 by the author(s)
PY - 2024
Y1 - 2024
N2 - Explainable machine learning significantly improves the transparency of deep neural networks. However, existing work is constrained to explaining the behavior of individual model predictions and cannot transfer explanations across models and tasks, making it time- and resource-consuming to explain each new task. To address this problem, we introduce a Transferable Vision Explainer (TVE) that can effectively explain various vision models in downstream tasks. Specifically, the transferability of TVE is realized through pre-training on large-scale datasets to learn a meta-attribution, which leverages the versatility of generic backbone encoders to comprehensively encode the attribution knowledge of an input instance. This enables TVE to seamlessly transfer to explain various downstream tasks without training on task-specific data. Empirical studies cover three architectures of vision models across three diverse downstream datasets, and the results indicate that TVE effectively explains these tasks without additional training on downstream data. The source code is available at https://github.com/guanchuwang/TVE.
AB - Explainable machine learning significantly improves the transparency of deep neural networks. However, existing work is constrained to explaining the behavior of individual model predictions and cannot transfer explanations across models and tasks, making it time- and resource-consuming to explain each new task. To address this problem, we introduce a Transferable Vision Explainer (TVE) that can effectively explain various vision models in downstream tasks. Specifically, the transferability of TVE is realized through pre-training on large-scale datasets to learn a meta-attribution, which leverages the versatility of generic backbone encoders to comprehensively encode the attribution knowledge of an input instance. This enables TVE to seamlessly transfer to explain various downstream tasks without training on task-specific data. Empirical studies cover three architectures of vision models across three diverse downstream datasets, and the results indicate that TVE effectively explains these tasks without additional training on downstream data. The source code is available at https://github.com/guanchuwang/TVE.
UR - http://www.scopus.com/inward/record.url?scp=85203845404&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85203845404&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85203845404
VL - 235
SP - 50248
EP - 50267
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
Y2 - 21 July 2024 through 27 July 2024
ER -