GuyTevet / MotionCLIP

Official Pytorch implementation of the paper "MotionCLIP: Exposing Human Motion Generation to CLIP Space"

How do I train my own MotionCLIP? #42

Open Omair-S opened 2 days ago

Omair-S commented 2 days ago

The README of this project says that to reproduce the paper model, run:

python -m src.train.train --clip_text_losses cosine --clip_image_losses cosine --pose_rep rot6d \
--lambda_vel 100 --lambda_rc 100 --lambda_rcxyz 100 \
--jointstype vertices --batch_size 20 --num_frames 60 --num_layers 8 \
--lr 0.0001 --glob --translation --no-vertstrans --latent_dim 512 --num_epochs 100 --snapshot 10 \
--device <GPU DEVICE ID> \
--dataset amass \
--datapath ./data/amass_db/amass_30fps_db.pt \
--folder ./exps/my-paper-model

I pasted this entire command into my terminal (with the motionclip environment activated), but it didn't work, and I have no idea where to go from here; the error messages don't really help. I have already set up the MotionCLIP files and can run the pre-trained model, i.e., I can generate .gifs corresponding to text descriptions. However, running this training command gives the following errors:

(screenshot of the terminal error output attached)

Any help would be really appreciated! Thanks for reading this far!

Omair-S commented 2 days ago

Well, I've made some progress: the '\' symbol doesn't work for line continuation on Windows; the '^' symbol is used instead. Also, instead of <GPU DEVICE ID>, I obviously had to replace it with the actual ID. Using --device 0 now. This might work!
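
For reference, the same README command with cmd-style continuations would look something like this (a sketch, assuming cmd.exe; PowerShell uses a backtick for continuation instead, and each ^ must be the last character on its line with no trailing spaces; --device 0 assumes GPU 0 as above):

REM cmd.exe version of the README training command (GPU device 0 assumed)
python -m src.train.train --clip_text_losses cosine --clip_image_losses cosine --pose_rep rot6d ^
--lambda_vel 100 --lambda_rc 100 --lambda_rcxyz 100 ^
--jointstype vertices --batch_size 20 --num_frames 60 --num_layers 8 ^
--lr 0.0001 --glob --translation --no-vertstrans --latent_dim 512 --num_epochs 100 --snapshot 10 ^
--device 0 ^
--dataset amass ^
--datapath ./data/amass_db/amass_30fps_db.pt ^
--folder ./exps/my-paper-model

Alternatively, joining the whole command onto a single line avoids continuation characters entirely.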