xuanqing94 / BayesianDefense

Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network
MIT License

KL loss #10

Closed starsky68 closed 4 years ago

starsky68 commented 4 years ago

Can KLLoss and re-parameters be customized?

xuanqing94 commented 4 years ago

If you meant to change the loss function, then yes it is definitely doable. What do you mean by re-parameters?

starsky68 commented 4 years ago

If you meant to change the loss function, then yes it is definitely doable. What do you mean by re-parameters?

Is this what you mean by reparameterization? The reparameterization trick: https://github.com/Harry24k/bayesian-neural-network-pytorch/blob/e6c6a858064319008912d3a91941a802769520ef/torchbnn/modules/linear.py#L86.
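For reference, the trick being discussed can be sketched in PyTorch as follows. This is a minimal standalone sketch, not the repo's actual layer; the `reparameterize` helper name and the 3x2 shapes are illustrative:

```python
import torch

def reparameterize(mu, log_sigma):
    """Sample w = mu + sigma * eps with eps ~ N(0, I).

    Gradients flow through mu and log_sigma because the randomness
    is isolated in eps (the reparameterization trick).
    """
    eps = torch.randn_like(mu)
    return mu + torch.exp(log_sigma) * eps

# Hypothetical posterior parameters for a 3x2 weight matrix
mu = torch.zeros(3, 2, requires_grad=True)
log_sigma = torch.full((3, 2), -3.0, requires_grad=True)

w = reparameterize(mu, log_sigma)
w.sum().backward()  # gradients reach both mu and log_sigma
```

Because `eps` carries all the randomness, the sampled weight `w` stays differentiable with respect to the variational parameters, which is what lets the KL-regularized loss be trained by backpropagation.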

starsky68 commented 4 years ago

https://github.com/xuanqing94/BayesianDefense/blob/c4c0be9b258f40130b40d6a6e009c459666f2722/main_vi.py#L97. Excuse me, why divide by 100 in this formula?

xuanqing94 commented 4 years ago

@starsky68

  1. Reparameterization tricks: Yes, this is the code.
  2. Divide by 100: this is the coefficient of the regularization term. We want it to be small, so I divide it by 100 (see Eq. 13 on page 5 of the paper). If you want to write a general-purpose library, you should pass it as an argument.

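A minimal sketch of passing that coefficient as an argument, as suggested above. The `elbo_loss` name and the `kl_coeff` default are hypothetical, not taken from the repo:

```python
import torch
import torch.nn.functional as F

def elbo_loss(logits, targets, kl, kl_coeff=0.01):
    """Cross-entropy plus a scaled KL regularizer.

    kl_coeff plays the role of the hard-coded 1/100; exposing it as
    an argument makes the loss reusable as a library function.
    """
    return F.cross_entropy(logits, targets) + kl_coeff * kl

# Example: uniform logits over 3 classes, a precomputed KL value
logits = torch.zeros(4, 3)
targets = torch.tensor([0, 1, 2, 0])
loss = elbo_loss(logits, targets, kl=torch.tensor(2.0))
```

Keeping the coefficient as a parameter also makes it easy to sweep over regularization strengths when tuning the trade-off between accuracy and robustness.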
starsky68 commented 4 years ago

https://github.com/xuanqing94/BayesianDefense/blob/c4c0be9b258f40130b40d6a6e009c459666f2722/utils/loss.py#L16. Thank you for your answers. I have another question: in this formula, does it matter whether the KL loss is added or subtracted? Or did I miss some detail?

xuanqing94 commented 4 years ago

@starsky68 You should add the KL term, as shown in the paper (Eq 13 page 5).
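One way to see why the KL term is added: for a Gaussian posterior and a standard-normal prior, the closed-form KL divergence is nonnegative, so adding it penalizes the posterior's deviation from the prior, while subtracting it would reward that deviation. A minimal sketch, assuming a fully factorized Gaussian posterior and an N(0, 1) prior (the `kl_gaussian` helper is illustrative, not the repo's code):

```python
import torch

def kl_gaussian(mu, log_sigma):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ), summed over elements."""
    return 0.5 * torch.sum(torch.exp(2 * log_sigma) + mu ** 2 - 1 - 2 * log_sigma)

mu = torch.randn(5)
log_sigma = torch.randn(5)
kl = kl_gaussian(mu, log_sigma)
# KL is always >= 0, so adding it to the cross-entropy yields a
# proper regularizer; it is exactly 0 when the posterior equals the prior.
```

With the sign flipped, minimizing the loss would push the posterior variance to grow without bound, which is why the plus sign in Eq. 13 matters.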