TY - GEN
T1 - An information theoretic view on selecting linguistic probes
AU - Zhu, Zining
AU - Rudzicz, Frank
N1 - Publisher Copyright:
© 2020 Association for Computational Linguistics.
PY - 2020
Y1 - 2020
N2 - There is increasing interest in assessing the linguistic knowledge encoded in neural representations. A popular approach is to attach a diagnostic classifier - or “probe” - to perform supervised classification from internal representations. However, how to select a good probe is under debate. Hewitt and Liang (2019) showed that high performance on diagnostic classification is itself insufficient, because it can be attributed either to “the representation being rich in knowledge” or to “the probe learning the task”, a dichotomy that Pimentel et al. (2020) challenged. We show this dichotomy is valid information-theoretically. In addition, we find that the methods to construct and select good probes proposed by the two papers, control task (Hewitt and Liang, 2019) and control function (Pimentel et al., 2020), are equivalent - the errors of their approaches are identical (modulo irrelevant terms). Empirically, these two selection criteria lead to results that highly agree with each other.
AB - There is increasing interest in assessing the linguistic knowledge encoded in neural representations. A popular approach is to attach a diagnostic classifier - or “probe” - to perform supervised classification from internal representations. However, how to select a good probe is under debate. Hewitt and Liang (2019) showed that high performance on diagnostic classification is itself insufficient, because it can be attributed either to “the representation being rich in knowledge” or to “the probe learning the task”, a dichotomy that Pimentel et al. (2020) challenged. We show this dichotomy is valid information-theoretically. In addition, we find that the methods to construct and select good probes proposed by the two papers, control task (Hewitt and Liang, 2019) and control function (Pimentel et al., 2020), are equivalent - the errors of their approaches are identical (modulo irrelevant terms). Empirically, these two selection criteria lead to results that highly agree with each other.
UR - https://www.scopus.com/pages/publications/85102547557
UR - https://www.scopus.com/inward/citedby.url?scp=85102547557&partnerID=8YFLogxK
U2 - 10.18653/v1/2020.emnlp-main.744
DO - 10.18653/v1/2020.emnlp-main.744
M3 - Conference contribution
AN - SCOPUS:85102547557
T3 - EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
SP - 9251
EP - 9262
BT - EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference
T2 - 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020
Y2 - 16 November 2020 through 20 November 2020
ER -