TY - GEN
T1 - Quantitative Symbolic Robustness Verification for Quantized Neural Networks
AU - Downing, Mara
AU - Eiers, William
AU - DeLong, Erin
AU - Lodha, Anushka
AU - Ozawa Burns, Brian
AU - Kadron, Ismet Burak
AU - Bultan, Tevfik
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
N2 - With the growing prevalence of neural networks in computer systems, addressing their dependability has become a critical verification problem. In this paper, we focus on quantitative robustness verification, i.e., determining whether small changes to the input of a neural network can change its output. In particular, we perform quantitative symbolic analysis, where the goal is to identify how many inputs in a given neighborhood are misclassified. We target quantized neural networks, where all values in the network are rounded to fixed-point values with limited precision. We employ symbolic execution and model counting to achieve quantitative verification of user-defined robustness properties, where the verifier reports not only whether the robustness properties are satisfied for the given neural network but also how many inputs violate them. This measure enables comparison of non-robust networks by assessing their level of robustness, which is not possible with existing quantized network verifiers. We implement and evaluate our approach as a tool called VerQ2. To the best of our knowledge, VerQ2 is the first quantitative verifier for quantized neural networks.
KW - Formal Verification
KW - Model Counting
KW - Neural Networks
UR - http://www.scopus.com/inward/record.url?scp=85211466280&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85211466280&partnerID=8YFLogxK
U2 - 10.1007/978-981-96-0617-7_8
DO - 10.1007/978-981-96-0617-7_8
M3 - Conference contribution
AN - SCOPUS:85211466280
SN - 9789819606160
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 125
EP - 145
BT - Formal Methods and Software Engineering - 25th International Conference on Formal Engineering Methods, ICFEM 2024, Proceedings
A2 - Ogata, Kazuhiro
A2 - Méry, Dominique
A2 - Sun, Meng
A2 - Liu, Shaoying
T2 - 25th International Conference on Formal Engineering Methods, ICFEM 2024
Y2 - 2 December 2024 through 6 December 2024
ER -