TY - JOUR
T1 - Hacking smart machines with smarter ones
T2 - How to extract meaningful data from machine learning classifiers
AU - Ateniese, Giuseppe
AU - Mancini, Luigi V.
AU - Spognardi, Angelo
AU - Villani, Antonio
AU - Vitali, Domenico
AU - Felici, Giovanni
N1 - Publisher Copyright:
Copyright © 2015 Inderscience Enterprises Ltd.
PY - 2015/9/1
Y1 - 2015/9/1
N2 - Machine-learning (ML) enables computers to learn how to recognise patterns, make unintended decisions, or react to a dynamic environment. The effectiveness of trained machines varies because of more suitable ML algorithms or because of superior training sets. Although ML algorithms are known and publicly released, training sets may not be reasonably ascertainable and, indeed, may be guarded as trade secrets. In this paper we focus our attention on ML classifiers and on the statistical information that can be unconsciously or maliciously revealed from them. We show that it is possible to infer unexpected but useful information from ML classifiers. In particular, we build a novel meta-classifier and train it to hack other classifiers, obtaining meaningful information about their training sets. Such information leakage can be exploited, for example, by a vendor to build more effective classifiers or to simply acquire trade secrets from a competitor's apparatus, potentially violating its intellectual property rights.
AB - Machine-learning (ML) enables computers to learn how to recognise patterns, make unintended decisions, or react to a dynamic environment. The effectiveness of trained machines varies because of more suitable ML algorithms or because of superior training sets. Although ML algorithms are known and publicly released, training sets may not be reasonably ascertainable and, indeed, may be guarded as trade secrets. In this paper we focus our attention on ML classifiers and on the statistical information that can be unconsciously or maliciously revealed from them. We show that it is possible to infer unexpected but useful information from ML classifiers. In particular, we build a novel meta-classifier and train it to hack other classifiers, obtaining meaningful information about their training sets. Such information leakage can be exploited, for example, by a vendor to build more effective classifiers or to simply acquire trade secrets from a competitor's apparatus, potentially violating its intellectual property rights.
KW - Attack methodology
KW - Information leakages
KW - Intellectual property rights
KW - Machine learning
KW - ML
KW - Security
KW - Trade secrets
KW - Unauthorised access
UR - http://www.scopus.com/inward/record.url?scp=84942309488&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84942309488&partnerID=8YFLogxK
U2 - 10.1504/IJSN.2015.071829
DO - 10.1504/IJSN.2015.071829
M3 - Article
AN - SCOPUS:84942309488
SN - 1747-8405
VL - 10
SP - 137
EP - 150
JO - International Journal of Security and Networks
JF - International Journal of Security and Networks
IS - 3
ER -