dxli94 / WLASL

WACV 2020 "Word-level Deep Sign Language Recognition from Video: A New Large-scale Dataset and Methods Comparison"
https://dxli94.github.io/WLASL/
838 stars 111 forks

About pose_per_individual_videos file #51

Closed XiongTLu closed 2 years ago

XiongTLu commented 2 years ago

Hi. Regarding the pose_per_individual_videos file: I downloaded it from the public link and found that it only contains 4174 folders. Then, when I was training with train_tgcn.py, I got a file error: `FileNotFoundError: [Errno 2] No such file or directory: '\\32337\\image_00018_keypoints.json'`. I looked into pose_per_individual_videos and found no 32337 directory in it. How can I resolve this problem? I hope someone can help me. Thank you a lot.
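
A quick way to check the extracted folder count and whether a given video id is present (a rough sketch; the root path below is just an example local layout, adjust as needed):

```python
import os

# Example local path; adjust to wherever pose_per_individual_videos was unzipped.
pose_root = r"D:\WLASLtest\data\pose_per_individual_videos"
video_id = "32337"

# Count the per-video pose folders and test for one specific id.
folders = [d for d in os.listdir(pose_root) if os.path.isdir(os.path.join(pose_root, d))]
print(f"{len(folders)} per-video pose folders found")
print(f"folder {video_id} present: {video_id in folders}")
```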

dxli94 commented 2 years ago

Can you make sure your download is complete and also unzipped without issues? Thanks.

XiongTLu commented 2 years ago

Yes, I'm sure. I downloaded it from the public link provided in the Pose-TGCN section of README.md, and I unzipped it into WLASL/data. But now I have a new problem; the error shows:

```
Traceback (most recent call last):
  File "D:\WLASLtest\code\TGCN\train_tgcn.py", line 124, in <module>
    run(split_file=split_file, configs=configs, pose_data_root=pose_data_root)
  File "D:\WLASLtest\code\TGCN\train_tgcn.py", line 64, in run
    train_losses, train_scores, train_gts, train_preds = train(log_interval, model,
  File "D:\WLASLtest\code\TGCN\train_utils.py", line 17, in train
    for batch_idx, data in enumerate(train_loader):
  File "C:\Users\vipuser\.conda\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 681, in __next__
    data = self._next_data()
  File "C:\Users\vipuser\.conda\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 721, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\Users\vipuser\.conda\envs\pytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\vipuser\.conda\envs\pytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\WLASLtest\code\TGCN\sign_dataset.py", line 137, in __getitem__
    x = self._load_poses(video_id, frame_start, frame_end, self.sample_strategy, self.num_samples)
  File "D:\WLASLtest\code\TGCN\sign_dataset.py", line 195, in _load_poses
    pose = read_pose_file(pose_path)
  File "D:\WLASLtest\code\TGCN\sign_dataset.py", line 36, in read_pose_file
    content = json.load(open(filepath))["people"][0]
FileNotFoundError: [Errno 2] No such file or directory: 'D:\\WLASLtest\\data\\pose_per_individual_videos\\11013\\image_00017_keypoints.json'
```

I think the error above comes from around line 37 of sign_dataset.py. I tried other Python files in this project in the same folder, and they can catch an IndexError. I also tried adding `except FileNotFoundError: return None` to read_pose_file in sign_dataset.py.
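
For context, a rough sketch of where that guard sits, reconstructed only from the traceback above (the real read_pose_file in sign_dataset.py continues to parse the keypoints after this line):

```python
import json

def read_pose_file(filepath):
    # Hypothetical reconstruction from line 36 of the traceback; not the actual code.
    try:
        content = json.load(open(filepath))["people"][0]
    except FileNotFoundError:
        # The keypoint JSON for this single frame is missing on disk.
        return None
    # ... the original implementation parses the OpenPose keypoints here ...
    return content
```

A guard like this only skips individual missing frames, though; if the whole folder for a video id is absent, every frame returns None and the downstream pose loading can still fail.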
But then I got another error:

```
C:\Users\vipuser\.conda\envs\pytorch\python.exe D:/WLASLtest/code/TGCN/train_tgcn.py
start training.
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00001_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00002_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00003_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00004_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00005_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00006_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00007_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00008_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00009_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00010_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00011_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00012_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00013_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00014_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00015_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00016_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00017_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00018_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00019_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00020_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00021_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00022_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00023_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00024_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00025_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00026_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00027_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00028_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00029_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00030_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00031_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00032_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00033_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00034_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00035_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00036_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00037_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00038_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00039_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00040_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00041_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00042_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00043_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00044_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00045_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00046_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00047_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00048_keypoints.json
D:\WLASLtest\data\pose_per_individual_videos\20137\image_00049_keypoints.json
Traceback (most recent call last):
  File "D:\WLASLtest\code\TGCN\train_tgcn.py", line 124, in <module>
    run(split_file=split_file, configs=configs, pose_data_root=pose_data_root)
  File "D:\WLASLtest\code\TGCN\train_tgcn.py", line 64, in run
    train_losses, train_scores, train_gts, train_preds = train(log_interval, model,
  File "D:\WLASLtest\code\TGCN\train_utils.py", line 17, in train
    for batch_idx, data in enumerate(train_loader):
  File "C:\Users\vipuser\.conda\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 681, in __next__
    data = self._next_data()
  File "C:\Users\vipuser\.conda\envs\pytorch\lib\site-packages\torch\utils\data\dataloader.py", line 721, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\Users\vipuser\.conda\envs\pytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\vipuser\.conda\envs\pytorch\lib\site-packages\torch\utils\data\_utils\fetch.py", line 49, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\WLASLtest\code\TGCN\sign_dataset.py", line 137, in __getitem__
    x = self._load_poses(video_id, frame_start, frame_end, self.sample_strategy, self.num_samples)
  File "D:\WLASLtest\code\TGCN\sign_dataset.py", line 213, in _load_poses
    last_pose = poses[-1]
IndexError: list index out of range

Process finished with exit code 1
```

It still can't catch the IndexError. And I found that there is no directory named 20137 in pose_per_individual_videos. How can I solve this? Thank you a lot.
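
To list every video id that has no pose folder before training, a rough cross-check like the following can help (the split-file path and its layout, a JSON keyed by video id, are assumptions here; adjust to the actual split file used by your config):

```python
import json
import os

pose_root = r"D:\WLASLtest\data\pose_per_individual_videos"  # example local path
split_file = r"D:\WLASLtest\data\splits\asl2000.json"        # example split file, adjust

# Assumes the split JSON is keyed by video id, as the preprocessed split files are.
with open(split_file) as f:
    split = json.load(f)

# Report every video id referenced by the split that has no extracted pose folder.
missing = sorted(vid for vid in split if not os.path.isdir(os.path.join(pose_root, vid)))
print(f"{len(missing)} videos in the split have no pose folder, e.g. {missing[:10]}")
```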

XiongTLu commented 2 years ago

I found my big mistake. I re-downloaded the pose_per_individual_videos file and found 21096 folders in the newly downloaded file, so I will re-train the model using the new file. Thank you for your help.

dxli94 commented 2 years ago

Could you try https://drive.google.com/drive/folders/1rnKr_PpDOHBL01de4NhdduM-JK51VHfF?

I've double checked and this should contain the full list of poses needed.

Thanks.

dxli94 commented 2 years ago

> I found my big mistake. I re-downloaded the pose_per_individual_videos file and found 21096 folders in the newly downloaded file, so I will re-train the model using the new file. Thank you for your help.

Glad you've figured it out. Closed.