mdzzmdzzmdzz opened 5 years ago:

Could you please explain in detail how the Jacobian matrix is computed in Practical Black-Box Attacks against Machine Learning? Thank you!
@mdzzmdzzmdzz Sorry for being so late; I'd still like to provide an answer. The Jacobian matrix is the matrix of all first-order partial derivatives of a multi-variable, vector-valued function. In machine learning it is used to compute the gradient of a model's outputs with respect to its inputs, which is central to many optimization and attack algorithms.
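Concretely, for a function f: R^n → R^m, the Jacobian is the m × n matrix whose entry (i, j) is ∂f_i/∂x_j, i.e. the gradient of each output component stacked as a row. For example, f(x, y) = (xy, x + y) has Jacobian [[y, x], [1, 1]].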
In the case of image classification, the Jacobian stacks the gradients of each class score with respect to the input pixels. A black-box target does not expose those gradients directly, so the paper trains a local substitute model and uses its gradients as a stand-in; the input image can then be modified iteratively to increase the likelihood of a specific target class (see the sketch below), and the resulting adversarial example transfers to the black-box target.
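As a hedged illustration of that iterative step on a model whose gradients we can access (e.g. the substitute), here is one common way to realize it, resembling a targeted gradient-sign step; `model`, `x`, `target`, and `step_size` are illustrative assumptions, not code from this repository:

```python
import torch
import torch.nn.functional as F

def targeted_step(model, x, target, step_size=0.01):
    """One gradient step that nudges x toward being classified as `target`."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), target)  # low loss = high target-class likelihood
    loss.backward()
    # Descend the loss with respect to the input pixels, not the weights.
    return (x - step_size * x.grad.sign()).detach()
```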
To compute the Jacobian matrix, one performs forward and backward passes through the model, differentiating the output class scores with respect to the input image. This is typically done with automatic differentiation, which computes the gradients without deriving them by hand.
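For instance, with PyTorch's autograd (a minimal sketch; the toy model and input shapes are assumptions):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy 10-class model
x = torch.randn(1, 1, 28, 28)                                # one grayscale image

# Differentiates each of the 10 class scores with respect to every input pixel.
J = torch.autograd.functional.jacobian(lambda inp: model(inp).squeeze(0), x)
print(J.shape)  # torch.Size([10, 1, 1, 28, 28]): one gradient map per class
```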
Once the Jacobian matrix is available, it can drive various attacks, such as adversarial example generation and model inversion. More generally, it is a powerful tool for understanding a model's behavior and supports a wide range of analysis and attack tasks.
The jsma.py script does not compute the full Jacobian matrix. It implements the Jacobian-based Saliency Map Attack (JSMA) against a convolutional neural network (CNN). The algorithm computes the gradients of the class scores with respect to the input (dy_dx), separates the gradient of the target class (dt_dx) from the summed gradients of the remaining classes (do_dx), and perturbs the input features for which increasing the pixel raises the target score while lowering the others. This is repeated, with each iteration pushing the model's output closer to the target label, until the target label is predicted or the maximum number of iterations is reached. The result is an adversarial example: an input that fools the model into giving the wrong output. The Jacobian matrix itself is the matrix of all first-order partial derivatives of a vector-valued function, and it can be used to analyze how the model behaves under small changes to the input. jsma.py never materializes it explicitly, but you can compute one symbolically with something like the following (a saliency-map sketch follows the example below):
```python
import sympy

def jacobian(func, x):
    """Symbolic Jacobian of a vector-valued function func at the symbols in x."""
    fx = sympy.Matrix(func(*x))  # outputs of func as a column vector
    m, n = len(fx), len(x)       # m outputs, n inputs (not x.shape[0] for both)
    J = sympy.zeros(m, n)
    for i, fxi in enumerate(fx):
        for j, xj in enumerate(x):
            J[i, j] = sympy.diff(fxi, xj)  # entry (i, j) = d f_i / d x_j
    return J
```
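For example:

```python
x, y = sympy.symbols('x y')
f = lambda x, y: [x * y, x + y]
print(jacobian(f, sympy.Matrix([x, y])))  # Matrix([[y, x], [1, 1]])
```

And here is a hedged sketch of the saliency computation JSMA performs on such a Jacobian. It follows the formula from Papernot et al., not the exact code in jsma.py; `J` is assumed to be a (classes × features) array and `target` the target class index:

```python
import numpy as np

def saliency_map(J, target):
    """Score each input feature for a targeted increase toward class `target`."""
    dt_dx = J[target]                # gradient of the target class score
    do_dx = J.sum(axis=0) - dt_dx    # summed gradients of all other classes
    keep = (dt_dx > 0) & (do_dx < 0) # raises target while lowering the rest
    return np.where(keep, dt_dx * np.abs(do_dx), 0.0)
```

The attack then perturbs the highest-saliency feature(s), recomputes the gradients, and repeats until the model predicts the target class or the iteration budget is exhausted.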