JustinTebbe / Dynamic-noise-AD

MIT License

The data should be loading in correctly now, right? Just one last step to go #5

Open henrychou1233 opened 1 month ago

henrychou1233 commented 1 month ago

```
nycu@LAPTOP-TGV3MD7N:/mnt/d/download2/ss/d3ad$ python3 main.py
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
Num params: 281088004
Current device is cuda
/home/nycu/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
diffusion_pytorch_model.safetensors: 100%|████████████| 335M/335M [03:31<00:00, 1.58MB/s]
vae/config.json: 100%|████████████| 551/551 [00:00<00:00, 4.05MB/s]
The config attributes {'scaling_factor': 0.18215} were passed to AutoencoderKL, but are not expected and will be ignored. Please verify your config.json configuration file.
Epoch 0 | Loss: 1.000379204750061
training time on 1 epochs is 0:10:48.312662
```

```
diffusion_pytorch_model.safetensors: 100%|████████████| 335M/335M [03:55<00:00, 1.42MB/s]
config.json: 100%|████████████| 547/547 [00:00<00:00, 6.07MB/s]
Epoch 0 | Loss: 0.14154160022735596
bin edges: [16.48680453 19.81119619 23.13558784 26.4599795  29.78437116
 33.10876281 36.43315447 39.75754612 43.08193778 46.40632944 49.73072109]
histogram: [54 14  5  7  0  0  0  0  0  1]
/home/nycu/.local/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
/mnt/d/download2/ss/d3ad/test.py:273: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:261.)
  test_trajectoy_steps = torch.Tensor([step_size]).type(torch.int64).to(config.model.device)[0]
Traceback (most recent call last):
  File "/mnt/d/download2/ss/d3ad/main.py", line 158, in <module>
    execute_main_test()
  File "/mnt/d/download2/ss/d3ad/main.py", line 155, in execute_main_test
    evaluate(args)
  File "/mnt/d/download2/ss/d3ad/main.py", line 120, in evaluate
    validate(unet, constants_dict, config)
  File "/mnt/d/download2/ss/d3ad/test.py", line 344, in validate
    threshold = metric(labels_list, predictions_normalized, heatmap_latent_list, GT_list, config)
  File "/mnt/d/download2/ss/d3ad/metrics.py", line 16, in metric
    pro = compute_pro(GT_list, anomaly_map_list, num_th = 200)
  File "/mnt/d/download2/ss/d3ad/metrics.py", line 116, in compute_pro
    df = pd.concat([df, pd.DataFrame({"pro": mean(pros), "fpr": fpr, "threshold": th}, index=[0])], ignore_index=True)
  File "/usr/lib/python3.10/statistics.py", line 328, in mean
    raise StatisticsError('mean requires at least one data point')
statistics.StatisticsError: mean requires at least one data point
```
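For context on the last frame of the traceback: `statistics.mean` raises `StatisticsError` as soon as it receives an empty sequence, so the crash in `compute_pro` means the `pros` list was empty for at least one threshold. A minimal reproduction:

```python
# statistics.mean() cannot handle an empty sequence - this is the exact
# exception raised at the bottom of the traceback above.
from statistics import mean, StatisticsError

try:
    mean([])
except StatisticsError as e:
    print(e)  # mean requires at least one data point
```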

JustinTebbe commented 1 month ago

I'm not able to reproduce that error. Please first check whether the error persists when you run only the evaluation with `python3 main.py --eval True`. If it does, verify that the lists of ground-truth masks (`GT_list`) and anomaly maps (`anomaly_map_list`), which the PRO function takes as input, are not empty.
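A sketch of that check, e.g. dropped into `evaluate` just before `validate` is called. The function name `check_pro_inputs` is illustrative, not code from the repository; the all-zeros check reflects the usual PRO computation, which only accumulates per-region overlap values for masks that actually contain anomalous pixels, so a dataset where every ground-truth mask is empty would also end in `mean([])`:

```python
import numpy as np

def check_pro_inputs(GT_list, anomaly_map_list):
    """Fail early, with a readable message, instead of crashing inside compute_pro."""
    assert len(GT_list) > 0, "GT_list is empty - no ground-truth masks were loaded"
    assert len(anomaly_map_list) > 0, "anomaly_map_list is empty - no anomaly maps were produced"
    # Count masks that contain at least one anomalous pixel.
    n_anomalous = sum(int(np.asarray(gt).max() > 0) for gt in GT_list)
    assert n_anomalous > 0, "every ground-truth mask is all zeros - check the dataset path"
    return n_anomalous

# Example with dummy data: one all-zero mask, one fully anomalous mask.
gts = [np.zeros((4, 4)), np.ones((4, 4))]
maps = [np.random.rand(4, 4) for _ in gts]
print(check_pro_inputs(gts, maps))  # 1
```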