Closed: boxbox2 closed this issue 7 months ago
May I ask how many epochs did you train for?
3000 epochs. I used the args1 config you gave, but in training_dataset_loader I set batch_size = 6, num_workers = 8, and I trained only the carpet category of the MVTec dataset:

```json
{
    "img_size": [256, 256],
    "Batch_Size": 6,
    "EPOCHS": 3000,
    "T": 1000,
    "base_channels": 128,
    "beta_schedule": "linear",
    "loss_type": "l2",
    "diffusion_lr": 1e-4,
    "seg_lr": 1e-5,
    "random_slice": true,
    "weight_decay": 0.0,
    "save_imgs": true,
    "save_vids": false,
    "dropout": 0,
    "attention_resolutions": "32,16,8",
    "num_heads": 4,
    "num_head_channels": -1,
    "noise_fn": "gauss",
    "channels": 3,
    "mvtec_root_path": "datasets/mvtec",
    "visa_root_path": "datasets/VisA/visa",
    "dagm_root_path": "datasets/dagm",
    "mpdd_root_path": "datasets/mpdd",
    "anomaly_source_path": "datasets/dtd",
    "noisier_t_range": 600,
    "less_t_range": 300,
    "condition_w": 1,
    "eval_normal_t": 200,
    "eval_noisier_t": 400,
    "output_path": "outputs"
}
```
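For reference, a config like this is typically read in as a plain dict before training. Below is a minimal sketch, assuming the JSON above is saved at a path such as `args/args1.json`; the repo's actual loader and field validation may differ.

```python
import json


def load_args(path):
    """Load a training config JSON and return it as a dict.

    Performs a couple of basic sanity checks on fields the
    training loop depends on (illustrative only).
    """
    with open(path) as f:
        args = json.load(f)
    assert args["Batch_Size"] > 0, "Batch_Size must be positive"
    assert args["EPOCHS"] > 0, "EPOCHS must be positive"
    return args
```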
Which "checkpoint_type" are you using in eval.py?
Which "checkpoint_type" are you using in eval.py?
best
Try it with "checkpoint_type=last".
It's the same as the original
Have you recorded the loss curve?
I haven't done anything special; I just ran the cloned code step by step. After running train.py, I ran eval.py.
I will run your config to verify, thank you for your patience. By the way, I found that the results in #40 were as expected.
I also have the same problem, did you find any solution for that?
not yet
Thank you for your patience. We used the same configuration as yours, and our reconstruction and segmentation results were accurate.
May I ask if you are using the latest code? We recently released an update that fixes the issue of loss becoming "nan" when the batch size is small. Could you please try our latest code again?
Thank you for your reply. I did indeed skip NaN values to handle this before. I will re-download your updated code and rerun the experiment. Thanks again for your patient response.
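The "skipping" workaround mentioned above usually amounts to checking the loss before the optimizer update and dropping the batch if it is not finite. A minimal sketch, with `optimizer_step` standing in for `optimizer.step()` (this is an illustration, not the repo's actual training loop):

```python
import math


def guarded_step(loss_value, optimizer_step):
    """Apply the optimizer update only if the loss is finite.

    Returns True if the step was taken, False if the batch
    was skipped because the loss was NaN or inf.
    """
    if not math.isfinite(loss_value):
        # Skipping hides the symptom; a NaN loss usually points
        # to a deeper issue (e.g. bad normalization or LR).
        return False
    optimizer_step()
    return True
```

Note that skipping only masks the symptom; the upstream fix (or correcting data normalization, as it turned out below) addresses the actual cause.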
Thanks for your prompt reply. My problem is solved now. Since I did not use the dataset you provided, I had some mistakes in normalization that may have led to this problem.
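For anyone hitting the same normalization pitfall with a custom dataset: diffusion models are commonly trained on images scaled to [-1, 1], so feeding raw uint8 values (or [0, 1] floats) can destabilize the loss. A minimal sketch of the usual transform; check the repo's dataset code for the exact range it expects:

```python
import numpy as np


def normalize_image(img_uint8):
    """Map a uint8 image in [0, 255] to float32 in [-1, 1],
    the input range diffusion models are commonly trained on.
    (Illustrative; verify against the repo's own dataset transform.)
    """
    img = img_uint8.astype(np.float32) / 255.0  # -> [0, 1]
    return img * 2.0 - 1.0                      # -> [-1, 1]
```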