TY - JOUR
T1 - Risk-averse classification
AU - Vitt, Constantine Alexander
AU - Dentcheva, Darinka
AU - Xiong, Hui
N1 - Publisher Copyright:
© 2019, Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2019
Y1 - 2019
N2 - We develop a new approach to solving classification problems, based on the theory of coherent measures of risk and on risk-sharing ideas. We introduce the notion of a risk-averse classifier and a family of risk-averse classification problems. We show that risk-averse classifiers are associated with minimal points of the possible classification errors, where minimality is understood with respect to a suitable stochastic order. The new approach allows risk to be measured by a distinct risk functional for each class. We analyze the structure of the new classification problem and establish its theoretical relation to known risk-neutral design problems. In particular, we show that the risk-sharing classification problem is equivalent to an implicitly defined optimization problem with unequal weights for each data point. Additionally, we derive a confidence interval for the total risk of a risk-averse classifier. We implement our methodology in a binary classification scenario on several different data sets. We formulate specific risk-averse support vector machines to demonstrate the proposed approach and carry out a numerical comparison with classifiers obtained using the Huber loss function and other loss functions known in the literature.
AB - We develop a new approach to solving classification problems, based on the theory of coherent measures of risk and on risk-sharing ideas. We introduce the notion of a risk-averse classifier and a family of risk-averse classification problems. We show that risk-averse classifiers are associated with minimal points of the possible classification errors, where minimality is understood with respect to a suitable stochastic order. The new approach allows risk to be measured by a distinct risk functional for each class. We analyze the structure of the new classification problem and establish its theoretical relation to known risk-neutral design problems. In particular, we show that the risk-sharing classification problem is equivalent to an implicitly defined optimization problem with unequal weights for each data point. Additionally, we derive a confidence interval for the total risk of a risk-averse classifier. We implement our methodology in a binary classification scenario on several different data sets. We formulate specific risk-averse support vector machines to demonstrate the proposed approach and carry out a numerical comparison with classifiers obtained using the Huber loss function and other loss functions known in the literature.
KW - Coherent measures of risk
KW - Machine learning
KW - Normalized classifiers
KW - Risk sharing
KW - Risk-aware classification
KW - Soft-margin classifier
KW - Support vector machines
UR - http://www.scopus.com/inward/record.url?scp=85070319756&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85070319756&partnerID=8YFLogxK
U2 - 10.1007/s10479-019-03344-6
DO - 10.1007/s10479-019-03344-6
M3 - Article
AN - SCOPUS:85070319756
SN - 0254-5330
JO - Annals of Operations Research
JF - Annals of Operations Research
ER -