zwx8981 / LIQE

[CVPR2023] Blind Image Quality Assessment via Vision-Language Correspondence: A Multitask Learning Perspective
MIT License
197 stars 11 forks

training process becomes unresponsive halfway through #4

Closed JennyVanessa closed 8 months ago

JennyVanessa commented 1 year ago

Has anyone encountered a similar issue where the training process becomes unresponsive halfway through, with no additional log entries being recorded, yet no error is thrown?

JennyVanessa commented 1 year ago

like:
1.7274575140626334e-06
(E:33, S:1 / 200) [Loss = 0.1450] (16.6 samples/sec; 0.964 sec/batch)
(E:33, S:2 / 200) [Loss = 0.1431] (17.4 samples/sec; 0.918 sec/batch)
(E:33, S:3 / 200) [Loss = 0.1425] (18.2 samples/sec; 0.878 sec/batch)
(E:33, S:4 / 200) [Loss = 0.1426] (19.0 samples/sec; 0.843 sec/batch)
(E:33, S:5 / 200) [Loss = 0.1429] (19.7 samples/sec; 0.811 sec/batch)
(E:33, S:6 / 200) [Loss = 0.1420] (19.3 samples/sec; 0.831 sec/batch)

(logging stops here; I don't know why.)

zwx8981 commented 1 year ago

This is a bit strange. Does this problem occur randomly or at a specific epoch (like E:33, as you show)?

JennyVanessa commented 1 year ago

This is a bit strange. Does this problem occur randomly or at a specific epoch (like E:33, as you show)?

It is random. When I interrupted the program, it had reached this point:

(E:34, S:1 / 200) [Loss = 0.1484] (4.4 samples/sec; 0.916 sec/batch)
(E:34, S:2 / 200) [Loss = 0.1454] (4.6 samples/sec; 0.870 sec/batch)
^CTraceback (most recent call last):
File "/home/user5/code/IAA/LIQE/train_unique_clip_weight.py", line 739, in <module>
best_result, best_epoch, srcc_dict, scene_dict, type_dict, all_result = train(model, best_result, best_epoch, srcc_dict,
File "/home/user5/code/IAA/LIQE/train_unique_clip_weight.py", line 195, in train
sample_batched = next(loader)
File "/home/user5/anaconda3/envs/testenv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/home/user5/anaconda3/envs/testenv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1186, in _next_data
idx, data = self._get_data()
File "/home/user5/anaconda3/envs/testenv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1142, in _get_data
success, data = self._try_get_data()
File "/home/user5/anaconda3/envs/testenv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 990, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/home/user5/anaconda3/envs/testenv/lib/python3.8/queue.py", line 179, in get
self.not_empty.wait(remaining)
File "/home/user5/anaconda3/envs/testenv/lib/python3.8/threading.py", line 306, in wait
gotit = waiter.acquire(True, timeout)
KeyboardInterrupt

Process finished with exit code 130

It appears that the data loading thread is not receiving data and is stuck waiting.
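As a side note, one way to make this kind of hang fail loudly instead of blocking forever is the DataLoader's timeout argument, which raises a RuntimeError if the workers do not deliver a batch in time. A minimal sketch (the dataset and settings below are hypothetical stand-ins, not the repo's actual configuration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in dataset; train_unique_clip_weight.py builds its own.
dataset = TensorDataset(torch.randn(1024, 3, 224, 224), torch.randn(1024))

loader = DataLoader(
    dataset,
    batch_size=16,
    num_workers=4,            # worker processes; try 0 to rule out worker deadlocks
    timeout=120,              # raise RuntimeError after 120 s instead of hanging silently
    persistent_workers=True,  # keep workers alive across epochs (PyTorch >= 1.7)
)

for images, targets in loader:
    pass  # training step goes here
```

If the hang disappears with num_workers=0, the problem is almost certainly inside the worker processes (for example, a library that is not fork-safe).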

GitHub-Ju commented 11 months ago

like:
1.7274575140626334e-06
(E:33, S:1 / 200) [Loss = 0.1450] (16.6 samples/sec; 0.964 sec/batch)
(E:33, S:2 / 200) [Loss = 0.1431] (17.4 samples/sec; 0.918 sec/batch)
(E:33, S:3 / 200) [Loss = 0.1425] (18.2 samples/sec; 0.878 sec/batch)
(E:33, S:4 / 200) [Loss = 0.1426] (19.0 samples/sec; 0.843 sec/batch)
(E:33, S:5 / 200) [Loss = 0.1429] (19.7 samples/sec; 0.811 sec/batch)
(E:33, S:6 / 200) [Loss = 0.1420] (19.3 samples/sec; 0.831 sec/batch)

(logging stops here; I don't know why.)

Hello, may I ask how you solved this problem?

zwx8981 commented 10 months ago

@JennyVanessa Hi, you may try this: https://github.com/zwx8981/LIQE/issues/17#issuecomment-1891385565
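For readers who cannot open the link: a frequently reported cause of this exact symptom (workers alive but never returning a batch) is a threaded library such as OpenCV deadlocking in forked DataLoader workers. One common workaround, which may or may not be what the linked comment suggests, is to disable OpenCV's internal threading in each worker via worker_init_fn:

```python
import cv2
import torch
from torch.utils.data import DataLoader, TensorDataset

def worker_init_fn(worker_id):
    # OpenCV keeps an internal thread pool that can deadlock after fork();
    # forcing single-threaded mode inside each worker avoids the silent hang.
    cv2.setNumThreads(0)

# Hypothetical stand-in dataset; the real training script builds its own.
dataset = TensorDataset(torch.randn(256, 3, 224, 224), torch.randn(256))

loader = DataLoader(dataset, batch_size=16, num_workers=4,
                    worker_init_fn=worker_init_fn)
```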