HuiZhang0812 / DiffusionAD


I want to use the VisA dataset. Besides changing the path, what else do I need to change? #31

Closed henrychou1233 closed 8 months ago

henrychou1233 commented 8 months ago

{ "img_size": [256,256], "Batch_Size": 16, "EPOCHS": 3000, "T": 1000, "base_channels": 128, "beta_schedule": "linear", "loss_type": "l2", "diffusion_lr": 1e-4, "seg_lr": 1e-5, "random_slice": true, "weight_decay": 0.0, "save_imgs":true, "save_vids":false, "dropout":0, "attention_resolutions":"32,16,8", "num_heads":4, "num_head_channels":-1, "noise_fn":"gauss", "channels":3, "mvtec_root_path":"/home/anywhere3090l/Desktop/henry/jjmvtec/mvtec", "visa_root_path":"/home/anywhere3090l/Desktop/henry/jjmvtec/VisA_dataset", "dagm_root_path":"datasets/dagm", "mpdd_root_path":"datasets/mpdd", "anomaly_source_path":"datasets/DTD", "noisier_t_range":600, "less_t_range":300, "condition_w":1, "eval_normal_t":200, "eval_noisier_t":400, "output_path":"outputs"

}

HuiZhang0812 commented 8 months ago

You also need to ensure that the foreground is placed in the correct folder, download the DTD dataset, and modify "anomaly_source_path".
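
As a side note, here is a quick sanity check (not part of the repo) for the DTD path, assuming the usual DTD layout of `images/<texture>/*.jpg`; the `anomaly_source_path` value is taken from the args1.json above:

```python
import glob
import os

# Value from args1.json above; adjust if your DTD copy lives elsewhere.
anomaly_source_path = "datasets/DTD"

# DTD normally ships its pictures as images/<texture_name>/*.jpg
imgs = glob.glob(os.path.join(anomaly_source_path, "images", "*", "*.jpg"))
print(f"Found {len(imgs)} DTD anomaly-source images under {anomaly_source_path}")
```

If this prints zero, the dataloader will not be able to sample anomaly textures.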

henrychou1233 commented 8 months ago

Are they all .jpg files? I see some are .png. How do I switch which dataset to use? I only see a lot of paths to fill in. Is it OK to write it like this?

{
  "img_size": [256, 256],
  "Batch_Size": 16,
  "EPOCHS": 3000,
  "T": 1000,
  "base_channels": 128,
  "beta_schedule": "linear",
  "loss_type": "l2",
  "diffusion_lr": 1e-4,
  "seg_lr": 1e-5,
  "random_slice": true,
  "weight_decay": 0.0,
  "save_imgs": true,
  "save_vids": false,
  "dropout": 0,
  "attention_resolutions": "32,16,8",
  "num_heads": 4,
  "num_head_channels": -1,
  "noise_fn": "gauss",
  "channels": 3,
  "visa_root_path": "/home/anywhere3090l/Desktop/henry/DiffusionAD-main/VisA1",
  "anomaly_source_path": "/home/anywhere3090l/Desktop/henry/DiffusionAD-main/dtd",
  "noisier_t_range": 600,
  "less_t_range": 300,
  "condition_w": 1,
  "eval_normal_t": 200,
  "eval_noisier_t": 400,
  "output_path": "outputs"
}

Everything else is unchanged.

HuiZhang0812 commented 8 months ago

Yes, we have defined dataloaders for different datasets in dataset_beta_thresh.py.

henrychou1233 commented 8 months ago

Then how should I change it?


HuiZhang0812 commented 8 months ago

Please see the "current_classes" in train.py.
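
For anyone following along, a minimal sketch of that change, assuming train.py selects datasets through a `current_classes` list as the author describes (the exact lines in train.py may differ):

```python
# The twelve standard VisA sub-classes; treat this as a sketch rather than
# a drop-in patch, since the variable layout in train.py may differ.
visa_classes = [
    'candle', 'capsules', 'cashew', 'chewinggum', 'fryum',
    'macaroni1', 'macaroni2', 'pcb1', 'pcb2', 'pcb3', 'pcb4', 'pipe_fryum',
]
current_classes = visa_classes  # train on these instead of the MVTec-AD classes
```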

henrychou1233 commented 8 months ago

(base) @.***:~/Desktop/henry/DiffusionAD-main$ python train.py
/home/anywhere3090l/Desktop/henry/DiffusionAD-main/train.py:20: DeprecationWarning: Pyarrow will become a required dependency of pandas in the next major release of pandas (pandas 3.0), (to allow more performant data types, such as the Arrow string type, and better interoperability with other libraries) but was not found to be installed on your system. If this would cause problems for you, please provide us feedback at https://github.com/pandas-dev/pandas/issues/54466
  import pandas as pd
class candle /home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle args1.json
defaultdict(<class 'str'>, {'img_size': [256, 256], 'Batch_Size': 16, 'EPOCHS': 3000, 'T': 1000, 'base_channels': 128, 'beta_schedule': 'linear', 'loss_type': 'l2', 'diffusion_lr': 0.0001, 'seg_lr': 1e-05, 'random_slice': True, 'weight_decay': 0.0, 'save_imgs': True, 'save_vids': False, 'dropout': 0, 'attention_resolutions': '32,16,8', 'num_heads': 4, 'num_head_channels': -1, 'noise_fn': 'gauss', 'channels': 3, 'visa_root_path': '/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa', 'anomaly_source_path': '/home/anywhere3090l/Desktop/henry/DiffusionAD-main/dtd', 'noisier_t_range': 600, 'less_t_range': 300, 'condition_w': 1, 'eval_normal_t': 200, 'eval_noisier_t': 400, 'output_path': 'outputs', 'argnum': '1'})
  0%|          | 0/56 [00:00<?, ?it/s]
global loadsave.cpp:248 findDecoder imread_('/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle/DISthresh/good/0907.JPG'): can't open/read file: check file path/integrity
global loadsave.cpp:248 findDecoder imread_('/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle/DISthresh/good/0223.JPG'): can't open/read file: check file path/integrity
global loadsave.cpp:248 findDecoder imread_('/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle/DISthresh/good/0724.JPG'): can't open/read file: check file path/integrity
global loadsave.cpp:248 findDecoder imread_('/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle/DISthresh/good/0000.JPG'): can't open/read file: check file path/integrity
global loadsave.cpp:248 findDecoder imread_('/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle/DISthresh/good/0372.JPG'): can't open/read file: check file path/integrity
global loadsave.cpp:248 findDecoder imread_('/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle/DISthresh/good/0247.JPG'): can't open/read file: check file path/integrity
global loadsave.cpp:248 findDecoder imread_('/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle/DISthresh/good/0526.JPG'): can't open/read file: check file path/integrity
  0%|          | 0/56 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/anywhere3090l/Desktop/henry/DiffusionAD-main/train.py", line 337, in <module>
    main()
  File "/home/anywhere3090l/Desktop/henry/DiffusionAD-main/train.py", line 332, in main
    train(training_dataset_loader, test_loader, args, data_len, sub_class, class_type, device)
  File "/home/anywhere3090l/Desktop/henry/DiffusionAD-main/train.py", line 111, in train
    for i, sample in enumerate(tbar):
  File "/home/anywhere3090l/miniconda3/lib/python3.11/site-packages/tqdm/std.py", line 1182, in __iter__
    for obj in iterable:
  File "/home/anywhere3090l/miniconda3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
    data = self._next_data()
  File "/home/anywhere3090l/miniconda3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
    return self._process_data(data)
  File "/home/anywhere3090l/miniconda3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
    data.reraise()
  File "/home/anywhere3090l/miniconda3/lib/python3.11/site-packages/torch/_utils.py", line 694, in reraise
    raise exception
cv2.error: Caught error in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/anywhere3090l/miniconda3/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/anywhere3090l/miniconda3/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/anywhere3090l/miniconda3/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/anywhere3090l/Desktop/henry/DiffusionAD-main/data/dataset_beta_thresh.py", line 524, in __getitem__
    thresh = cv2.resize(thresh, dsize=(self.resize_shape[1], self.resize_shape[0]))
cv2.error: OpenCV(4.8.1) /io/opencv/modules/imgproc/src/resize.cpp:4062: error: (-215:Assertion failed) !ssize.empty() in function 'resize'

global loadsave.cpp:248 findDecoder imread_('/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle/DISthresh/good/0321.JPG'): can't open/read file: check file path/integrity
global loadsave.cpp:248 findDecoder imread_('/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle/DISthresh/good/0470.JPG'): can't open/read file: check file path/integrity
global loadsave.cpp:248 findDecoder imread_('/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle/DISthresh/good/0098.JPG'): can't open/read file: check file path/integrity
global loadsave.cpp:248 findDecoder imread_('/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle/DISthresh/good/0645.JPG'): can't open/read file: check file path/integrity

I'm getting an error.
I have already changed the classes from the current ones to visa.
Thank you.

HuiZhang0812 commented 8 months ago

Have you downloaded the foreground? For more information, please see README.md.

henrychou1233 commented 8 months ago

[image: image.png] I have downloaded it.


HuiZhang0812 commented 8 months ago

It seems that "/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa/candle/DISthresh/good/0645.JPG" doesn't exist.
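
A quick way to confirm this (my own snippet, not from the repo) is to count the foreground files the dataloader is asking for, using the path and layout printed in the error above:

```python
import glob
import os

# Root from the log above; the errors show the dataloader reading
# <root>/<class>/DISthresh/good/*.JPG for the foreground thresholds.
visa_root = "/home/anywhere3090l/Desktop/henry/DiffusionAD-main/visa"

for cls in sorted(os.listdir(visa_root)):
    pattern = os.path.join(visa_root, cls, "DISthresh", "good", "*")
    print(f"{cls}: {len(glob.glob(pattern))} foreground files")
```

Classes that print 0 are the ones whose foreground download or extraction did not end up in the expected folder.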

henrychou1233 commented 8 months ago

I'm using a 3090 with 64 GB. Is that enough? It keeps telling me I don't have enough memory:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB. GPU 0 has a total capacty of 23.48 GiB of which 58.38 MiB is free. Including non-PyTorch memory, this process has 23.38 GiB memory in use. Of the allocated memory 22.98 GiB is allocated by PyTorch, and 58.60 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF


henrychou1233 commented 8 months ago

(base) @.***:~/Desktop/henry/DiffusionAD-main$ python eval.py
/home/anywhere3090l/Desktop/henry/DiffusionAD-main/eval.py:23: DeprecationWarning: Pyarrow will become a required dependency of pandas in the next major release of pandas (pandas 3.0), (to allow more performant data types, such as the Arrow string type, and better interoperability with other libraries) but was not found to be installed on your system. If this would cause problems for you, please provide us feedback at https://github.com/pandas-dev/pandas/issues/54466
  import pandas as pd
checkpoint outputs/model/diff-params-ARGS=1/carpet/params-best.pt
Traceback (most recent call last):
  File "/home/anywhere3090l/Desktop/henry/DiffusionAD-main/eval.py", line 438, in <module>
    main()
  File "/home/anywhere3090l/Desktop/henry/DiffusionAD-main/eval.py", line 366, in main
    args, output = load_parameters(device, sub_class, checkpoint_type)
  File "/home/anywhere3090l/Desktop/henry/DiffusionAD-main/eval.py", line 136, in load_parameters
    output = load_checkpoint(param[4:-5], device, sub_class, checkpoint_type, args)
  File "/home/anywhere3090l/Desktop/henry/DiffusionAD-main/eval.py", line 123, in load_checkpoint
    loaded_model = torch.load(ck_path, map_location=device)
  File "/home/anywhere3090l/miniconda3/lib/python3.11/site-packages/torch/serialization.py", line 986, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/anywhere3090l/miniconda3/lib/python3.11/site-packages/torch/serialization.py", line 435, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/anywhere3090l/miniconda3/lib/python3.11/site-packages/torch/serialization.py", line 416, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'outputs/model/diff-params-ARGS=1/carpet/params-best.pt'

I'm using visa, but this is what came out at the end.


HuiZhang0812 commented 8 months ago

If the GPU memory is not enough, please reduce the batch size. You also need to specify "current_classes" when running eval.py.
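
For the CUDA out-of-memory error, lowering "Batch_Size" in args1.json (for example from 16 to 8) is the usual first step, in line with the author's advice. For eval.py, here is a hedged sketch of the class switch, assuming eval.py uses the same kind of `current_classes` list as train.py; the FileNotFoundError above shows it was still looking for the MVTec 'carpet' checkpoint:

```python
# Make eval.py evaluate the VisA classes that were actually trained.
# It then builds checkpoint paths such as
# outputs/model/diff-params-ARGS=1/<class>/params-best.pt instead of .../carpet/...
current_classes = [
    'candle', 'capsules', 'cashew', 'chewinggum', 'fryum',
    'macaroni1', 'macaroni2', 'pcb1', 'pcb2', 'pcb3', 'pcb4', 'pipe_fryum',
]
```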