This problem seems to be caused by a bug in the loading of the ZJU-Mocap dataset; I will fix it soon. Also, training with multiple GPUs has not yet been tested.
So all tasks can still be done on a single GeForce RTX 3090? By the way, I am trying to solve the problems I encountered with multi-GPU parallel training; I will contact you when I have results. Thanks :)
Yes, we used only one GPU for all tasks. The bug in loading the ZJU-Mocap dataset has been fixed; please try the new code.
I will retry later. Thanks :)
I found another issue with ZJU-Mocap: all mask images in the mask folder of each character in the dataset are black images (all values are zero). Do you know the reason for this?
There are small but non-zero values in the mask images, so the images look very dark.
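A quick way to confirm this is to read one mask and inspect its raw values; here is a minimal sketch (the file path is just a hypothetical example, adjust it to your copy of the dataset):

```python
import cv2
import numpy as np

# Hypothetical path -- point this at any file in a character's "mask" folder.
mask = cv2.imread("mask/Camera_B1/000000.png", cv2.IMREAD_GRAYSCALE)

# Small integer labels (e.g. 0 and 1) render as a near-black image.
print(np.unique(mask))

# Rescale to the full 0-255 range so the silhouette becomes visible.
cv2.imwrite("mask_vis.png", (mask > 0).astype(np.uint8) * 255)
```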
Sorry, I understand :)
Sorry to bother you again. I was training on a single GeForce RTX 3090, but I had a problem with the first stage of training: I set `gpus` in the `geometry_zju_377.yaml` file to `[2, 3]` and explicitly set `distributed` to `True`, but the error was reported as follows. How can I solve it? By the way, I tried to fix the error by explicitly setting some `os.environ` values, but that did not work. Maybe I set the values incorrectly.
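In case it helps others hitting the same error: I don't know this project's exact launch path, but if `distributed: True` routes through `torch.distributed` with the default `env://` init method, the variables below are the ones it reads. This is a minimal sketch with illustrative values (the address, port, and world size are assumptions, not taken from the repo):

```python
import os
import torch
import torch.distributed as dist

# Illustrative values only -- adjust to your machine and GPU list.
# Restricting visibility to GPUs 2 and 3 re-indexes them as devices 0 and 1;
# this must be set before any CUDA call.
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "2,3")

# Variables read by torch.distributed's "env://" init method.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
os.environ.setdefault("WORLD_SIZE", "2")   # two processes for gpus [2, 3]
os.environ.setdefault("RANK", "0")         # each process sets its own rank
os.environ.setdefault("LOCAL_RANK", "0")

dist.init_process_group(backend="nccl", init_method="env://")
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
```

In practice it is usually easier to let `torchrun --nproc_per_node=2` spawn the processes and set these variables for you than to fix them by hand.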