Abstract

While deep neural networks (DNNs) have had a revolutionary impact on many sub-fields of AI over the last few years, we are also witnessing an explosion in DNN reliability and security issues. To address these problems, we propose a new DNN testing algorithm, called the Constrained Gradient Descent (CGD) method, and an implementation we call CGDTest. The CGD method takes as input a DNN model M (as is, and not a symbolic formulation of it), a logical property or set of constraints φ (e.g., a specification of "closeness" between a given input and a class of attack vectors), and a label y, and outputs an instance x such that argmax(M(x)) = y and x satisfies the constraints φ (e.g., x is an adversarial input to M). Our CGD algorithm is best viewed as a gradient-descent (GD) optimization method, with the twist that the user can also specify logical properties characterizing the specific type of input they want. The loss function of the CGD method takes these constraints into account, in addition to the typical loss, to efficiently find inputs with the specified properties. We empirically compare CGDTest against 9 different state-of-the-art methods over a rich set of DNNs with millions of parameters, and show that CGDTest outperforms them in terms of scalability and efficacy.
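
To give a rough sense of the idea described above (this is only a minimal sketch, not the CGDTest implementation), the PyTorch-style code below encodes a constraint φ, here an L∞ "closeness" bound around a seed input, as a penalty term added to the usual classification loss, and runs gradient descent over the input until the model predicts the desired label while the constraint is satisfied. The names `model`, `x0`, `y_target`, `epsilon`, and `cgd_search` are hypothetical placeholders introduced for illustration.

```python
# Minimal sketch of constrained gradient descent over an input (illustrative
# only; not the authors' CGDTest implementation). Assumes a trained PyTorch
# classifier `model`, a seed input `x0` of shape [1, ...], a target label
# `y_target` of shape [1], and an L-infinity bound `epsilon`.
import torch
import torch.nn.functional as F

def cgd_search(model, x0, y_target, epsilon=0.03, steps=200, lr=0.01,
               penalty_weight=10.0):
    model.eval()
    x = x0.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x], lr=lr)

    for _ in range(steps):
        logits = model(x)
        # Constraint phi encoded as a penalty: how far x strays outside the
        # L-infinity ball of radius epsilon around the seed input x0.
        violation = torch.clamp((x - x0).abs() - epsilon, min=0.0)
        # Stop as soon as argmax(M(x)) = y and phi holds.
        if int(logits.argmax(dim=1)) == int(y_target) and float(violation.sum()) == 0.0:
            return x.detach()
        # "Typical" loss pushing the prediction toward the target label,
        # plus the constraint penalty.
        loss = F.cross_entropy(logits, y_target) + penalty_weight * violation.sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return None  # no satisfying input found within the step budget
```

In this sketch the constraint only enters through a soft penalty; stronger weights (or a projection step) push harder toward inputs that satisfy φ exactly.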

Video Recording