Hi,
Could you please just remove the ".zip" extension from the "2000.zip" checkpoint file (without unzipping it) and check whether it works properly?
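If it is easier to script, here is a minimal sketch of that rename in Python; the path below is only an example based on the MVTec layout mentioned in this thread, so adjust it to your local checkpoint directory:

# Minimal sketch: drop the ".zip" suffix from the downloaded checkpoint file
# without extracting it. Example path only (checkpoints/MVTec/hazelnut/2000.zip).
from pathlib import Path

ckpt = Path("checkpoints/MVTec/hazelnut/2000.zip")
if ckpt.exists():
    ckpt.rename(ckpt.with_suffix(""))  # becomes checkpoints/MVTec/hazelnut/2000
else:
    print(f"{ckpt} not found; nothing to rename")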
Thank you! It works for me. However, I encountered another problem when loading the feature_extractor checkpoints at line 36 of ddad.py (feature_extractor = domain_adaptation(self.unet, self.config, fine_tune=False)). The notice is as follows:
I do not find a feature_extractor checkpoint named feat{DA_chp} in my downloaded files. Is the fine-tuned model not released, and should I fine-tune a feature_extractor myself?
I will release the checkpoints in the near future.
Prior to evaluation, it is necessary to fine-tune the feature extractor. To accomplish this, navigate to the config.yaml file and adjust the settings as outlined in the repository tables. Set DA_epochs and DA_chp based on the feature extractor epochs, and configure w with the corresponding value from the table.
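For illustration, the relevant part of config.yaml would look roughly like the excerpt below. The section placement is my assumption; the key names come from this thread and the README tables, and the example values are the screw settings reported later in this thread, so take the actual per-category values from the tables:

# Hypothetical config.yaml excerpt; section name is assumed, values are examples.
model:
  load_chp: 2000   # denoising model checkpoint to load
  DA_epochs: 30    # feature-extractor fine-tuning epochs
  DA_chp: 4        # feature-extractor checkpoint used at evaluation
  w: 2             # conditioning weight from the README table
  w_DA: 3          # weight used during domain adaptation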
After configuring the values in the config file, execute the following command to fine-tune the feature extractor. This process may take some time to complete:
python main.py --domain_adaptation True
Once the fine-tuning is done, you can evaluate the model:
python main.py --eval True
If you wish to obtain information on misclassified samples and visualize reconstructions, you can customize the values of misclassifications and visualisation in the config file accordingly. Please let me know if you face any other problems.
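For reference, those flags would look roughly like this in config.yaml; their exact section within the file is my assumption, while the flag names are the ones mentioned above:

# Hypothetical config.yaml excerpt; placement within the file may differ.
misclassifications: true   # report misclassified samples
visualisation: true        # save reconstruction visualisations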
fine, thank you very much
Hi, I have fine-tuned the feature extractor on hazelnut and screw, but the I-AUROC/P-AUROC is (96.4, 99.3) on screw and (99.8, 98.2) on hazelnut. I set (load_chp=2000, DA_epochs=30, DA_chp=4, w=2, w_DA=3) for screw and (load_chp=2000, DA_epochs=30, DA_chp=3, w=5, w_DA=3) for hazelnut.
I set the corresponding parameters (w and DA_chp) according to the README, and set DA_epochs=30 and w_DA=3 for both categories. Is there any problem?
Apologies for the late response. I just cloned the code and conducted tests on both categories. Results are consistent with the reported ones. It's a bit challenging for me to pinpoint why you might be obtaining slightly different answers. Could you confirm that you're using the most recent version of our code?
To enhance usability, I plan to publish feature checkpoints over the weekend.
Also, to decrease the fine-tuning time, you can set the value of DA_epochs similar to DA_chp. It is not required to fine-tune for 30 epochs, as that may be time-consuming.
Thank you very much. Sorry for not checking the issue for a long time. I certainly used the most recent version. I think the only difference is the batch size (I set it to 12 because of limited GPU memory), which is probably the reason. Looking forward to the checkpoints.
True, changing batch size will change the results. I have uploaded the MVTec checkpoints and will upload VisA checkpoints tomorrow.
Very nice work. But something went wrong when I loaded the provided checkpoint to evaluate and test the model.
Class: hazelnut  w: 8  v: 1  load_chp: 2000  feature extractor: wide_resnet101_2  w_DA: 3  DLlambda: 0.1  config.model.test_trajectoy_steps=250, config.data.test_batch_size=16
Detecting Anomalies...
Traceback (most recent call last):
  File "D:\code\DDAD-main\main.py", line 96, in <module>
    detection(config)
  File "D:\code\DDAD-main\main.py", line 36, in detection
    checkpoint = torch.load(os.path.join(os.getcwd(), config.model.checkpoint_dir, config.data.category, str(config.model.load_chp)))
  File "C:\Users\23871\anaconda3\envs\vicuna\lib\site-packages\torch\serialization.py", line 791, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "C:\Users\23871\anaconda3\envs\vicuna\lib\site-packages\torch\serialization.py", line 271, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "C:\Users\23871\anaconda3\envs\vicuna\lib\site-packages\torch\serialization.py", line 252, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'D:\code\DDAD-main\checkpoints/MVTec\hazelnut\2000'
I made sure the path to the file was correct. I noticed the downloaded checkpoint is a .zip file, such as checkpoints/MVTec/hazelnut/2000.zip; is that right? I tried to extract the zip file, but that does not work either. How should I load the checkpoint?
The checkpoint is like this: My Python is 3.10 and PyTorch is 2.0. Also, when I execute python main.py --eval True, there is no --eval argument in args, so I changed it to python main.py --detection True.