zxz267 / AvatarJLM

[ICCV 2023] Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling
https://zxz267.github.io/AvatarJLM/
MIT License

IndexError: Caught IndexError in DataLoader worker process 0. #3

Closed: Recialhot closed this issue 6 months ago

Recialhot commented 10 months ago

The problem in the title occurs. How can I solve it? Thanks.

zxz267 commented 10 months ago

Could you please provide more details?

Recialhot commented 10 months ago

[INFO] Update config {'opt': './options/opt_ajlm.json', 'task': 'AvatarJLM', 'protocol': '1', 'checkpoint': 'AvatarJLM-p1-100k.pth', 'vis': False}.
number of GPUs is: 1
LogHandlers setup!
-------------------------------
number of test data is 0
Dataset [AMASS_Dataset - testdataset] is created.
[Model Info] Use GT tracking signals replacement.
[Model Info] Use position token.
[Model Info] Use rotation token.
[Model Info] Use input token.
[Model Info] Total token number is 45.
Training model [ModelAvatarJLM] is created.
Loading model for G [AvatarJLM-p1-100k.pth] ...
Traceback (most recent call last):
  File "D:\SEU\poseEstimate\AvatarJLM\test.py", line 154, in <module>
    main(opt, args.vis)
  File "D:\SEU\poseEstimate\AvatarJLM\test.py", line 141, in main
    = evaluate(opt, logger, model, test_loader, save_animation=save_animation)
  File "D:\SEU\poseEstimate\AvatarJLM\test.py", line 18, in evaluate
    for index, test_data in enumerate(test_loader):
  File "D:\Anaconda3\envs\avatarjlm\lib\site-packages\torch\utils\data\dataloader.py", line 630, in __next__
    data = self._next_data()
  File "D:\Anaconda3\envs\avatarjlm\lib\site-packages\torch\utils\data\dataloader.py", line 1345, in _next_data
    return self._process_data(data)
  File "D:\Anaconda3\envs\avatarjlm\lib\site-packages\torch\utils\data\dataloader.py", line 1371, in _process_data
    data.reraise()
  File "D:\Anaconda3\envs\avatarjlm\lib\site-packages\torch\_utils.py", line 694, in reraise
    raise exception
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "D:\Anaconda3\envs\avatarjlm\lib\site-packages\torch\utils\data\_utils\worker.py", line 308, in _worker_loop
    data = fetcher.fetch(index)
  File "D:\Anaconda3\envs\avatarjlm\lib\site-packages\torch\utils\data\_utils\fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\Anaconda3\envs\avatarjlm\lib\site-packages\torch\utils\data\_utils\fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "D:\SEU\poseEstimate\AvatarJLM\data\dataset_amass.py", line 48, in __getitem__
    filename = self.filename_list[idx]
IndexError: list index out of range

zxz267 commented 10 months ago

Could you please share how your data is organized? The log shows "number of test data is 0", so it seems the data is not in the right place for the loader to find it.
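
For reference, a quick way to check whether the loader can see any processed files is to count them before running test.py. This is a minimal sketch, not part of the repository; it assumes the preprocessed AMASS sequences are stored as .pkl files (as mentioned later in this thread) under some data root, whose path here is only a placeholder:

```python
# Minimal sanity check (not part of the repo): count the processed .pkl
# files that a dataset rooted at `data_root` could pick up.
import glob
import os

def count_pkl_files(data_root: str) -> int:
    files = glob.glob(os.path.join(data_root, "**", "*.pkl"), recursive=True)
    print(f"found {len(files)} .pkl files under {data_root}")
    return len(files)

# If this prints 0, the "number of test data is 0" log line above is expected.
count_pkl_files("./data")
```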

Recialhot commented 10 months ago

Hello, I can run test.py now, thanks. But the test results did not include any images. Is there a relevant part of the code for this? How do I use it?

zxz267 commented 10 months ago

You can add "--vis" to the command when you run "test.py" for visualization (after installing all the requirements for visualization).
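
For example, based on the config keys printed in the log above ('opt', 'protocol', 'checkpoint', 'vis'), the test command would look something like the following; the flag names are inferred from that log rather than verified against the repository:

```
python test.py --opt ./options/opt_ajlm.json --protocol 1 --checkpoint AvatarJLM-p1-100k.pth --vis
```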

Recialhot commented 10 months ago

Hello, after running test.py, AVI files with sequence numbers 0, 10, 20, ... are generated in the results. How can I view the video in real time while testing? Or does the code only produce AVI files for the specified sequence numbers, for review after the run ends?

zxz267 commented 10 months ago

Currently, the code is designed to generate visualizations only after completing the entire sequence prediction. To enable real-time viewing, you will likely need to implement specific modifications.
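
One possible direction for such a modification, sketched here under the assumption that the evaluation loop produces one visualization image per frame (the frame source is hypothetical, not the repository's actual rendering code), is to display each frame with OpenCV as it is produced instead of only writing it to the .avi file:

```python
# Hedged sketch: live preview of frames as they are rendered, instead of
# waiting for the whole sequence. `frame_bgr` is one rendered image in
# OpenCV's BGR layout; press 'q' in the preview window to stop.
import cv2

def show_frame_live(frame_bgr, wait_ms: int = 1) -> bool:
    cv2.imshow("AvatarJLM live preview", frame_bgr)
    # waitKey both refreshes the window and polls the keyboard.
    return (cv2.waitKey(wait_ms) & 0xFF) != ord("q")
```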

Recialhot commented 10 months ago

Now I run the test file and get the AVI file after the run. How can I display the full effect in Unity? Have you done any related work? Thank you.

zxz267 commented 10 months ago

I am not personally familiar with this particular matter. You might find it helpful to refer to this for guidance.

Recialhot commented 10 months ago

Hello, your excellent work is trained with 3 tracking points. I now want to see the training and testing results with 5 points, so I need to add 2 points (the two feet). Where should I modify your code? Thank you.

zxz267 commented 10 months ago

Our network takes masked tracking signals as inputs. In our case, we only take 3 points as inputs and mask other joints. If you want to extend our method to 5 points, you can simply modify the masked joints here.
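
Since the linked code location is not visible in this thread, here is only a hedged illustration of the idea: assuming the model keeps a boolean mask over the 22 SMPL body joints and unmasks the tracked ones, a 5-point setting would additionally unmask the two ankles. The joint indices follow the standard SMPL joint order; the actual variable names in the repository will differ.

```python
# Hypothetical sketch of a 3-point vs. 5-point input mask (not the repo's code).
import torch

NUM_JOINTS = 22                       # SMPL body joints
HEAD, L_WRIST, R_WRIST = 15, 20, 21   # 3-point setting: HMD + two controllers
L_ANKLE, R_ANKLE = 7, 8               # extra trackers for a 5-point setting

def build_input_mask(five_point: bool = False) -> torch.Tensor:
    """True = this joint's tracking signal is kept as input; False = masked."""
    tracked = [HEAD, L_WRIST, R_WRIST]
    if five_point:
        tracked += [L_ANKLE, R_ANKLE]
    mask = torch.zeros(NUM_JOINTS, dtype=torch.bool)
    mask[tracked] = True
    return mask
```

Note that after changing which joints are unmasked, the model would need to be retrained to take advantage of the additional signals.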

Recialhot commented 9 months ago

I noticed that the .pkl files produced by the AMASS processing use a right-handed coordinate system (x, y, z), but isn't SMPL a left-handed coordinate system (x, z, y)? Thanks.

zxz267 commented 9 months ago

You are correct. AMASS data is indeed processed by converting the root pose to the (x, y, z) coordinate system, which differs from the SMPL model.
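
For concreteness, the usual relation between a z-up world (as in AMASS) and a y-up world (SMPL's convention) is a 90-degree rotation about the x-axis. This is a generic illustration of that relation, not the repository's actual preprocessing code:

```python
# Generic z-up -> y-up conversion: (x, y, z) maps to (x, z, -y),
# i.e. a -90 degree rotation about the x-axis.
import numpy as np

R_ZUP_TO_YUP = np.array([
    [1.0,  0.0, 0.0],
    [0.0,  0.0, 1.0],
    [0.0, -1.0, 0.0],
])

p_zup = np.array([0.3, 1.2, 1.7])  # z is the vertical axis here
p_yup = R_ZUP_TO_YUP @ p_zup       # now y is the vertical axis
```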

Recialhot commented 9 months ago

When I obtain the Pico controller data via OpenVR, what specific position on the controller does the pose correspond to? Is it similar to the human palm or the wrist? Thanks.

left_controller_pose = get_device_pose(poses, openvr.TrackedControllerRole_LeftHand)

def get_device_pose(poses, device_index):
    pose = poses[device_index]
    return pose.mDeviceToAbsoluteTracking
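
For reference, mDeviceToAbsoluteTracking is OpenVR's 3x4 row-major device-to-world matrix, so the device position sits in its last column. The reported pose is the controller's own tracked origin, which varies by device model and is generally not aligned with a human joint such as the wrist. A small helper, as a sketch:

```python
# Sketch: extract the 3D position from an OpenVR HmdMatrix34_t
# (row-major 3x4; the translation is the last column).
def pose_position(m34):
    return (m34[0][3], m34[1][3], m34[2][3])
```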

Recialhot commented 9 months ago

Hello, I would like to ask what you mean by the (x, y, z) coordinate system: x forward, y left, z up? Thanks. When I capture real data in (x, y, z) coordinates, I cannot test it well with the model, so I would like to ask about your coordinate system settings.

Recialhot commented 9 months ago

When I use z as up, no matter which way x and y face, it causes this: (image: body_919)

Only when I use z as forward, x as left, and y as up do I get this result, but the body does not stand upright and the lifted feet tend to fluctuate. When I look at dataset_amass or your dataset_tracking, the data is always z-up, yet the final result is correct, while my z-up data does not work. Why is that? Thanks. (image: body_100)