TY - GEN
T1 - Pheonix at SemEval-2020 Task 5
T2 - 14th International Workshops on Semantic Evaluation, SemEval 2020
AU - Babvey, Pouria
AU - Borrelli, Dario
AU - Zhao, Yutong
AU - Lipizzi, Carlo
N1 - Publisher Copyright:
© 2020 14th International Workshops on Semantic Evaluation, SemEval 2020 - co-located 28th International Conference on Computational Linguistics, COLING 2020, Proceedings. All rights reserved.
PY - 2020
Y1 - 2020
N2 - This paper presents the deep-learning model submitted to the SemEval-2020 Task 5 competition: “Detecting Counterfactuals”. We participated in both Subtask1 and Subtask2. The model proposed in this paper ranked 2nd in Subtask2: “Detecting antecedent and consequence”. Our model approaches the task as a sequence labeling problem. The architecture is built on top of BERT, and a multi-head attention layer with label masking is used to benefit from the mutual information between nearby labels. For prediction, a multi-stage algorithm is used in which the model finalizes the predictions with higher certainty at each step and uses them in the following steps. Our results show that masking the labels is not only an efficient regularization method but also improves the accuracy of the model compared with alternatives such as CRF. Label masking can thus serve as a regularization method in sequence labeling, and it improves the performance of the model by learning specific patterns in the target variable.
AB - This paper presents the deep-learning model submitted to the SemEval-2020 Task 5 competition: “Detecting Counterfactuals”. We participated in both Subtask1 and Subtask2. The model proposed in this paper ranked 2nd in Subtask2: “Detecting antecedent and consequence”. Our model approaches the task as a sequence labeling problem. The architecture is built on top of BERT, and a multi-head attention layer with label masking is used to benefit from the mutual information between nearby labels. For prediction, a multi-stage algorithm is used in which the model finalizes the predictions with higher certainty at each step and uses them in the following steps. Our results show that masking the labels is not only an efficient regularization method but also improves the accuracy of the model compared with alternatives such as CRF. Label masking can thus serve as a regularization method in sequence labeling, and it improves the performance of the model by learning specific patterns in the target variable.
UR - http://www.scopus.com/inward/record.url?scp=85103923027&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85103923027&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85103923027
T3 - 14th International Workshops on Semantic Evaluation, SemEval 2020 - co-located 28th International Conference on Computational Linguistics, COLING 2020, Proceedings
SP - 677
EP - 682
BT - 14th International Workshops on Semantic Evaluation, SemEval 2020 - co-located 28th International Conference on Computational Linguistics, COLING 2020, Proceedings
A2 - Herbelot, Aurelie
A2 - Zhu, Xiaodan
A2 - Palmer, Alexis
A2 - Schneider, Nathan
A2 - May, Jonathan
A2 - Shutova, Ekaterina
Y2 - 12 December 2020 through 13 December 2020
ER -