facebookresearch / odin

A simple and effective method for detecting out-of-distribution images in neural networks.

About Input Processing code #14

Closed chagmgang closed 3 years ago

chagmgang commented 3 years ago

In the paper, the input preprocessing equation (Eq. (2)) is x̃ = x − ε·sign(−∇_x log S_ŷ(x; T)).

This equation is introduced in your repository, https://github.com/facebookresearch/odin/blob/master/code/calData.py#L72

tempInputs = torch.add(inputs.data,  -noiseMagnitude1, gradient)
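(If I read it correctly, the older torch.add(input, value, other) form used here computes input + value * other, so this line is inputs − ε · gradient.)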

However, gradient = torch.ge(inputs.grad.data, 0) at your line 65 represents sign(∇_x log S_ŷ(x; T)). So I think your code actually implements x̃ = x − ε·sign(∇_x log S_ŷ(x; T)), not x̃ = x − ε·sign(−∇_x log S_ŷ(x; T)).

What's your opinion?

YixuanLi commented 3 years ago

Hi Keumgang,

The gradient is calculated on the negative log-likelihood loss, i.e. loss = −log S_ŷ(x; T), so inputs.grad holds −∇_x log S_ŷ(x; T). See https://github.com/facebookresearch/odin/blob/master/code/calData.py#L61

Noise of magnitude ε is then subtracted. Overall, the implementation corresponds to Equation (2) in the paper.
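A minimal sketch to see this (the toy model and names below are just for illustration, not the repo's code):

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins for the real network and input, just for illustration.
toy_net = nn.Linear(8, 3)
x = torch.randn(1, 8, requires_grad=True)
temper = 1000.0

logits = toy_net(x) / temper
pred = logits.argmax(dim=1)

# Cross-entropy on the predicted label computes -log S_yhat(x; T),
# so after backward(), x.grad holds -grad_x log S_yhat(x; T).
loss = F.cross_entropy(logits, pred)
loss.backward()

# For comparison, differentiate log S_yhat(x; T) directly.
x2 = x.detach().clone().requires_grad_(True)
log_s = F.log_softmax(toy_net(x2) / temper, dim=1)[0, pred.item()]
log_s.backward()

# The two sign patterns are opposite: torch.ge(x.grad, 0) is sign(-grad log S),
# so subtracting eps times it moves x along +grad log S.
print(torch.equal(x.grad.sign(), -x2.grad.sign()))  # True (barring exact zeros)

So torch.ge(inputs.grad.data, 0) in calData.py is the sign of the loss gradient, i.e. sign(−∇_x log S_ŷ(x; T)), and subtracting ε times it matches Equation (2).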

Let me know if you have any further questions!

Best, Sharon


chagmgang commented 3 years ago

Thank you for answering.

Furthermore, my understanding of the paper is that the perturbed input should decrease the maximum softmax score. Quoting the paper:

small perturbations are added to decrease the softmax
score for the true label and force the neural network to make a wrong prediction.

However, with your code the softmax score increases instead of decreasing.

Below is code that prints the softmax score both with and without the input perturbation.

Also, in my case:

import numpy as np
import torch
import torch.nn.functional as F
from torch.autograd import Variable

# net1, criterion, data, f1, g1, and CUDA_DEVICE are defined earlier in my script.
noiseMagnitude1 = 0.0014
temper = 1000
images, _ = data

inputs = Variable(images.cuda(CUDA_DEVICE), requires_grad = True)
outputs = net1(inputs)

### printing softmax
printing_indices = torch.argmax(outputs, axis=1)[0]
printing_softmax = F.softmax(outputs, dim=-1)[0][printing_indices]
print('original outputs: ', printing_softmax)

# Calculating the confidence of the output, no perturbation added here, no temperature scaling used
nnOutputs = outputs.data.cpu()
nnOutputs = nnOutputs.numpy()
nnOutputs = nnOutputs[0]
nnOutputs = nnOutputs - np.max(nnOutputs)
nnOutputs = np.exp(nnOutputs)/np.sum(np.exp(nnOutputs))
f1.write("{}, {}, {}\n".format(temper, noiseMagnitude1, np.max(nnOutputs)))

# Using temperature scaling
outputs = outputs / temper

# Calculating the perturbation we need to add, that is,
# the sign of gradient of cross entropy loss w.r.t. input
maxIndexTemp = np.argmax(nnOutputs)
labels = Variable(torch.LongTensor([maxIndexTemp]).cuda(CUDA_DEVICE))
loss = criterion(outputs, labels)
loss.backward()

# Quantizing the gradient: torch.ge gives {0, 1}, rescaled below to {-1, +1}
gradient = torch.ge(inputs.grad.data, 0)
gradient = (gradient.float() - 0.5) * 2
# Normalizing the gradient to the same space of image
gradient[0][0] = (gradient[0][0])/(0.2023)
gradient[0][1] = (gradient[0][1])/(0.1994)
gradient[0][2] = (gradient[0][2])/(0.2010)
# Adding small perturbations to images
tempInputs = torch.add(inputs.data,  -noiseMagnitude1, gradient)
outputs = net1(Variable(tempInputs))

### printing softmax
printing_indices = torch.argmax(outputs, axis=1)[0]
printing_softmax = F.softmax(outputs, dim=-1)[0][printing_indices]
print('perturbated outputs: ', printing_softmax)
print('------------')

outputs = outputs / temper
# Calculating the confidence after adding perturbations
nnOutputs = outputs.data.cpu()
nnOutputs = nnOutputs.numpy()
nnOutputs = nnOutputs[0]
nnOutputs = nnOutputs - np.max(nnOutputs)
nnOutputs = np.exp(nnOutputs)/np.sum(np.exp(nnOutputs))
g1.write("{}, {}, {}\n".format(temper, noiseMagnitude1, np.max(nnOutputs)))

Below are some of the printed softmax scores:

original outputs:  tensor(0.5309, device='cuda:0', grad_fn=<SelectBackward>)
perturbated outputs:  tensor(0.9331, device='cuda:0', grad_fn=<SelectBackward>)
------------
original outputs:  tensor(1.0000, device='cuda:0', grad_fn=<SelectBackward>)
perturbated outputs:  tensor(1., device='cuda:0', grad_fn=<SelectBackward>)
------------
original outputs:  tensor(0.9995, device='cuda:0', grad_fn=<SelectBackward>)
perturbated outputs:  tensor(1.0000, device='cuda:0', grad_fn=<SelectBackward>)
------------
original outputs:  tensor(0.9942, device='cuda:0', grad_fn=<SelectBackward>)
perturbated outputs:  tensor(0.9996, device='cuda:0', grad_fn=<SelectBackward>)
------------
original outputs:  tensor(0.7618, device='cuda:0', grad_fn=<SelectBackward>)
perturbated outputs:  tensor(0.9521, device='cuda:0', grad_fn=<SelectBackward>)
------------
original outputs:  tensor(0.9991, device='cuda:0', grad_fn=<SelectBackward>)
perturbated outputs:  tensor(1.0000, device='cuda:0', grad_fn=<SelectBackward>)
------------
original outputs:  tensor(0.9775, device='cuda:0', grad_fn=<SelectBackward>)
perturbated outputs:  tensor(0.9994, device='cuda:0', grad_fn=<SelectBackward>)
------------
original outputs:  tensor(0.9907, device='cuda:0', grad_fn=<SelectBackward>)
perturbated outputs:  tensor(0.9997, device='cuda:0', grad_fn=<SelectBackward>)
------------
original outputs:  tensor(0.7449, device='cuda:0', grad_fn=<SelectBackward>)
perturbated outputs:  tensor(0.9994, device='cuda:0', grad_fn=<SelectBackward>)
------------

I can't understand this phenomenon. What do you think?

YixuanLi commented 3 years ago

Hi,

The perturbation is added with the goal of increasing the softmax score. The description and motivation are given just below Equation (2) in the paper:

The method is inspired by the idea of adversarial examples (Goodfellow et al., 2015), where small perturbations are added to decrease the softmax score for the true label and force the neural network to make a wrong prediction. Here, our goal and setting is the opposite: we aim to increase the softmax score of any given input, without the need for a class label at all.

By subtracting the gradient of the loss (i.e., moving along +∇_x log S_ŷ(x; T)), we boost the softmax score. The results you shared are consistent with the intended behavior. This is different from adversarial noise construction, where the gradient is added to decrease the softmax score.
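To first order, this is the expansion behind the analysis in Section 5.2: since x̃ − x = ε·sign(∇_x log S_ŷ(x; T)),

log S_ŷ(x̃; T) ≈ log S_ŷ(x; T) + ∇_x log S_ŷ(x; T)ᵀ (x̃ − x) = log S_ŷ(x; T) + ε‖∇_x log S_ŷ(x; T)‖₁ ≥ log S_ŷ(x; T).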

The effect and analysis of the noise perturbation can be seen in Section 5.2 (https://arxiv.org/pdf/1706.02690.pdf).

Hope it helps!


chagmgang commented 3 years ago

Thanks for answering. That resolves all my questions.