hulianyuyy / CorrNet

Continuous Sign Language Recognition with Correlation Network (CVPR 2023)

IndexError #10

Closed: NaNBridge closed this issue 11 months ago

NaNBridge commented 11 months ago

Hi! I tried to run python main.py --device 0 --load-weights /weights/dev_18.90_PHOENIX14-T.pt --phase test, but I got an IndexError. I'm sure my dataset path is correct. The complete error message is as follows:

Traceback (most recent call last):
  File "/home/nan/project/CSLR/CorrNet/main.py", line 256, in <module>
    processor.start()
  File "/home/nan/project/CSLR/CorrNet/main.py", line 98, in start
    dev_wer = seq_eval(self.arg, self.data_loader["dev"], self.model, self.device,
  File "/home/nan/project/CSLR/CorrNet/seq_scripts.py", line 58, in seq_eval
    for batch_idx, data in enumerate(tqdm(loader)):
  File "/home/nan/anaconda3/envs/SLR/lib/python3.9/site-packages/tqdm/std.py", line 1180, in __iter__
    for obj in iterable:
  File "/home/nan/anaconda3/envs/SLR/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/nan/anaconda3/envs/SLR/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/home/nan/anaconda3/envs/SLR/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/home/nan/anaconda3/envs/SLR/lib/python3.9/site-packages/torch/_utils.py", line 425, in reraise
    raise self.exc_type(msg)
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/nan/anaconda3/envs/SLR/lib/python3.9/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/nan/anaconda3/envs/SLR/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/nan/anaconda3/envs/SLR/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/nan/project/CSLR/CorrNet/dataset/dataloader_video.py", line 50, in __getitem__
    input_data, label = self.normalize(input_data, label)
  File "/home/nan/project/CSLR/CorrNet/dataset/dataloader_video.py", line 87, in normalize
    video, label = self.data_aug(video, label, file_id)
  File "/home/nan/project/CSLR/CorrNet/utils/video_augmentation.py", line 24, in __call__
    image = t(image)
  File "/home/nan/project/CSLR/CorrNet/utils/video_augmentation.py", line 157, in __call__
    im_h, im_w, im_c = clip[0].shape
IndexError: list index out of range

All errors occur after the dataset has finished loading.

hulianyuyy commented 11 months ago

This is usually caused by a wrong dataset link; you could check the dataset path. To make sure the images are loaded correctly, you can print the size of input_data at line 50 in dataloader_video.py to see whether it is empty.
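
For example, a quick check could look like this (a minimal sketch; the file and line follow the traceback above, and the debugging lines themselves are hypothetical additions):

# In CorrNet/dataset/dataloader_video.py, inside __getitem__, just before line 50.
# The print/assert lines are hypothetical debugging additions; remove them after checking.
print("frames loaded:", len(input_data))
assert len(input_data) > 0, "no frames were read; check the dataset path"
input_data, label = self.normalize(input_data, label)  # existing line 50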

NaNBridge commented 11 months ago

I created a soft link to the PHOENIX14-T dataset as the README said. I don't know if it is correct; in the file system, I can access the dataset normally through the soft link. But after printing, I found that the length of input_data is 0 at line 50 in dataloader_video.py.

hulianyuyy commented 11 months ago

This means the links to the dataset were not created correctly, and null data is being read. You may check the path to the dataset.

NaNBridge commented 11 months ago

Sorry, but I don't understand where the soft link went wrong; I followed the README to create it. Under my dataset folder there is a phoenix2014-T folder. Is the soft link wrong here?

hulianyuyy commented 11 months ago

You should create the soft link to the downloaded phoenix2014-T dataset. Under this folder, there should be four subfolders: annotations, evaluation, features and models. You can check your path.
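
If it helps, a small check like the following can verify that the link resolves to the expected layout (a sketch; the root path is a placeholder for wherever your soft link lives):

import os

# Placeholder path: adjust to your ./dataset/phoenix2014-T soft link.
dataset_root = "./dataset/phoenix2014-T"
for sub in ["annotations", "evaluation", "features", "models"]:
    path = os.path.join(dataset_root, sub)
    print(path, "OK" if os.path.isdir(path) else "MISSING")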

forsterseb commented 11 months ago

Hi, I had the exact same error. Through debugging I found that the video paths in /annotations/manual/*.csv did not match the existing paths in /features/fullFrame-210x260px/, because the directory '1' was missing.

I moved the files in the features folders into a new "1" subdirectory by executing the following script for the dev, train and test folders:

import os
import shutil

# Run once for each of the dev, train and test folders.
dir_path = "/home/user/Documents/CorrNet/dataset/phoenix2014-T/features/fullFrame-210x260px/dev"
subfolders = [x for x in os.listdir(dir_path) if os.path.isdir(os.path.join(dir_path, x))]

for sub in subfolders:
    if "1" in os.listdir(os.path.join(dir_path, sub)):
        continue  # a "1" subdirectory already exists; assume the files are in the right place
    files = [x for x in os.listdir(os.path.join(dir_path, sub)) if os.path.isfile(os.path.join(dir_path, sub, x))]
    goal_dir = os.path.join(dir_path, sub, "1")
    os.makedirs(goal_dir)
    for f in files:
        # Move each frame image into the new "1" subdirectory.
        shutil.move(os.path.join(dir_path, sub, f), goal_dir)

Then I ran /preprocessing/dataset_preprocess-T.py --process-image --multiprocessing again, and afterwards python main.py --device 0 --load-weights /weights/dev_18.90_PHOENIX14-T.pt --phase test ran without exceptions.

NaNBridge commented 11 months ago

Thank you, I solved the problem.