kasvii / PMCE

[ICCV 2023] PyTorch Implementation of "Co-Evolution of Pose and Mesh for 3D Human Body Estimation from Video"
https://kasvii.github.io/PMCE
MIT License

Using multi-GPU to accelerate #8

Closed: zhixuanli closed this issue 7 months ago

zhixuanli commented 7 months ago

Hi authors,

It's me again. Happy New Year!

I'm just wondering whether this codebase supports multi-GPU training. I have set the GPU ids to 0,1 in the command, i.e. python ./main/train.py --cfg ./config/train_mesh_h36m.yml --gpu 0,1, but still only the GPU with id 0 is used.

Is there any way to use two or more GPUs to accelerate training? At present, the second-stage training on the H36M dataset takes about 20 hours.

Thanks again for all your support.

zhixuanli commented 7 months ago

By the way, could you please share the training time of your experiments on a single 3090 GPU with the H36M dataset, as mentioned in the paper?

On my side, this setting takes about 30 hours in total.

zhixuanli commented 7 months ago

Hi, happy Friday, and have a nice weekend.

Could you please take a look at this question when you have time? Thank you so much! :)

kasvii commented 7 months ago

By the way, could you please share the training time of your experiments on a single 3090 GPU with the H36M dataset, as mentioned in the paper?

On my side, this setting takes about 30 hours in total.

On a single 3090 GPU, the time overheads are as follows:

Mesh stage
● 3DPW: 30 min/epoch, about half a day in total
● H36M: 1 h 25 min/epoch, about a day and a half in total

Pose stage
● 3DPW: 1 h 40 min/epoch
● H36M: 25 min/epoch

kasvii commented 7 months ago

Hi authors,

It's me again. Happy New Year!

I'm just wondering whether this codebase supports multi-GPU training. I have set the GPU ids to 0,1 in the command, i.e. python ./main/train.py --cfg ./config/train_mesh_h36m.yml --gpu 0,1, but still only the GPU with id 0 is used.

Is there any way to use two or more GPUs to accelerate training? At present, the second-stage training on the H36M dataset takes about 20 hours.

Thanks again for all your support.

You could try adding self.model = nn.DataParallel(self.model) at Line1 and Line2, then run the command with multiple GPUs. I have tried it and it works.
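
For reference, a minimal sketch of what that change might look like in a generic PyTorch trainer. The class, attribute names, and device_ids argument here are illustrative only, not PMCE's actual Trainer; Line1 and Line2 refer to the locations linked in the comment above.

```python
import torch
import torch.nn as nn

# Illustrative trainer skeleton (not PMCE's actual code): the idea is simply
# to wrap the model in nn.DataParallel after it is built and moved to GPU.
class Trainer:
    def __init__(self, model, device_ids=(0, 1)):
        self.model = model.cuda()
        if torch.cuda.device_count() > 1:
            # Splits each input batch along dim 0 across the listed GPUs and
            # gathers the outputs back on device_ids[0].
            self.model = nn.DataParallel(self.model, device_ids=list(device_ids))

    def save_checkpoint(self, path):
        # With DataParallel, the underlying weights live in self.model.module,
        # so save that state_dict to keep checkpoints loadable on a single GPU.
        if isinstance(self.model, nn.DataParallel):
            state = self.model.module.state_dict()
        else:
            state = self.model.state_dict()
        torch.save(state, path)
```

One thing to watch: checkpoints saved directly from a DataParallel-wrapped model carry a "module." prefix in their state_dict keys, which is a common source of load errors when switching back to single-GPU evaluation.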

zhixuanli commented 7 months ago

By the way, could you please share the training time of your experiments on a single 3090 GPU with the H36M dataset, as mentioned in the paper? On my side, this setting takes about 30 hours in total.

On a single 3090 GPU, the time overheads are as follows:

Mesh stage
● 3DPW: 30 min/epoch, about half a day in total
● H36M: 1 h 25 min/epoch, about a day and a half in total

Pose stage
● 3DPW: 1 h 40 min/epoch
● H36M: 25 min/epoch

Your training time and mine are almost the same. Thanks for sharing.

zhixuanli commented 7 months ago

Hi authors, it's me again. Happy New Year! I'm just wondering whether this codebase supports multi-GPU training. I have set the GPU ids to 0,1 in the command, i.e. python ./main/train.py --cfg ./config/train_mesh_h36m.yml --gpu 0,1, but still only the GPU with id 0 is used. Is there any way to use two or more GPUs to accelerate training? At present, the second-stage training on the H36M dataset takes about 20 hours. Thanks again for all your support.

You could try adding self.model = nn.DataParallel(self.model) at Line1 and Line2, then run the command with multiple GPUs. I have tried it and it works.

This is great. Thanks!

Dragon2938734 commented 4 months ago

By the way, could you please share the training time of your experiments on a single 3090 GPU with the H36M dataset, as mentioned in the paper? On my side, this setting takes about 30 hours in total.

On a single 3090 GPU, the time overheads are as follows:

Mesh stage
● 3DPW: 30 min/epoch, about half a day in total
● H36M: 1 h 25 min/epoch, about a day and a half in total

Pose stage
● 3DPW: 1 h 40 min/epoch
● H36M: 25 min/epoch

Hello, I would like to ask how long "img_db = joblib.load(db_file)" in dataset.py takes when loading H36M, since when I run the code it seems to stay at this step. Thank you!