Hi, and thanks for your interest.
To train the model, you should minimize the LSALoss defined in novelty-detection/models/loss_functions/lsaloss.py.
You should do something like

```python
opt.zero_grad()                         # clear gradients from the previous step
x_r, z, z_dist = model(x)               # forward pass: reconstruction, latent code, CPD estimate
loss_fn(x, x_r, z, z_dist).backward()   # loss_fn is an instance of LSALoss
opt.step()                              # update the model parameters
```

where `x` is a normal training example, `opt` is your favorite torch optimizer, and `loss_fn` is an instance of LSALoss.
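
For anyone landing here looking for a complete script: a minimal end-to-end training sketch built around that snippet could look like the following. It assumes the LSAMNIST model and the LSALoss from this repository; the module paths, class name, and constructor arguments (input_shape, code_length, cpd_channels) are inferred from the repo layout and should be checked against your checkout.

```python
import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

# Assumed import paths; verify against your checkout of the repo.
from models.LSA_mnist import LSAMNIST
from models.loss_functions.lsaloss import LSALoss

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# One-class setup: train only on the "normal" class (digit 0 here).
train_set = datasets.MNIST('data', train=True, download=True,
                           transform=transforms.ToTensor())
normal_idx = (train_set.targets == 0).nonzero(as_tuple=True)[0].tolist()
loader = DataLoader(Subset(train_set, normal_idx), batch_size=64, shuffle=True)

# Hypothetical hyperparameters; code_length and cpd_channels must match
# between the model and the loss.
model = LSAMNIST(input_shape=(1, 28, 28), code_length=64, cpd_channels=100).to(device)
loss_fn = LSALoss(cpd_channels=100)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(200):
    running = 0.0
    for x, _ in loader:
        x = x.to(device)
        opt.zero_grad()
        x_r, z, z_dist = model(x)          # reconstruction, latent code, CPD estimate
        loss = loss_fn(x, x_r, z, z_dist)  # reconstruction term + latent likelihood term
        loss.backward()
        opt.step()
        running += loss.item()
    print(f'epoch {epoch}: mean loss {running / len(loader):.3f}')
```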
Hope this helps, D
Thank you very much for your reply; I am very interested in the autoencoder-based method you proposed.
Good morning, sorry to bother you again. I trained the network last night with the following settings: data: MNIST, batch size: 64, optimizer: Adam, epochs: 400. After 400 epochs, the loss dropped from 500 to 167. How many epochs are needed to train the model, or how low does the loss need to go before terminating training? Thanks!
Hi,
In our experiment we trained it for 200 epochs. Alternatively, you can try to monitor the reconstruction error on a validation set to pick your checkpoint.
Best, D
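
A minimal sketch of that validation-based checkpoint selection: after each epoch, measure the mean reconstruction error on a held-out set of normal samples and keep the weights that minimize it. Here `model` and `device` are assumed to be set up as in the training sketch above, and `val_loader` is a hypothetical DataLoader over held-out normal examples.

```python
import torch

@torch.no_grad()
def validation_reconstruction_error(model, val_loader, device):
    """Mean per-sample squared reconstruction error on the validation set."""
    model.eval()
    total, n = 0.0, 0
    for x, _ in val_loader:
        x = x.to(device)
        x_r, z, z_dist = model(x)
        total += torch.sum((x - x_r) ** 2).item()
        n += x.size(0)
    model.train()
    return total / n

best_err = float('inf')
for epoch in range(200):
    # ... run one training epoch as in the loop above ...
    err = validation_reconstruction_error(model, val_loader, device)
    if err < best_err:
        best_err = err
        torch.save(model.state_dict(), 'best_checkpoint.pt')  # pick this checkpoint
```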
Hi, I am trying to replicate your experimental results. Could you tell me the parameters of your Adam optimizer? Another question about monitoring the reconstruction error on a validation set to pick a checkpoint: why not monitor the novelty score instead?
Thanks! Best, Gin
I see that there is no training code in this repository. Could you provide the training code? Thanks very much!