nefeliandreou / PoseRepresentation

Official implementation of dual quaternion transformations as described in the paper "Pose Representations for Deep Skeletal Animation".
https://nefeliandreou.github.io/projects/PoseRepresentation/
MIT License

json file and model #1

Closed luoww1992 closed 1 year ago

luoww1992 commented 1 year ago

Where are the json and model files?

nefeliandreou commented 1 year ago

Hi! For the model we have adopted the following codebases https://github.com/papagina/Auto_Conditioned_RNN_motion and https://github.com/facebookresearch/QuaterNet/tree/main. Which json are you referring to?

luoww1992 commented 1 year ago

Q1: There are many json files. What is the difference between them? Do they correspond to different actions, and how do I get them?
Q2: The skeleton looks like SMPLH. Must the input bvh file have the same skeleton hierarchy as the target skeleton?
Q2.1: If it is not SMPLH, which skeleton type is it?
Q3: How do I process the bvh file to get the 3D world positions?


luoww1992 commented 1 year ago

What are the libraries code1, rhythmdata, and rhythmnet? I found some candidates, but they look wrong. Can you share the URLs of the libraries you used?


nefeliandreou commented 1 year ago
  1. Different json files correspond to different representations, loss weights, and so on, which you can configure depending on the model you want to test.
  2. The template skeleton is not SMPLH. We are using data from the CMU dataset, so we use the template bvh from there. You can find more information about exactly which subjects we use in the paper.
  3. You can extract the 3D positions from the dual quaternions (in the current coordinate system) using this function.
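For reference, the translation encoded in a unit dual quaternion q = q_r + ε q_d can be recovered as t = 2 q_d q_r* (this is standard dual-quaternion algebra; the sketch below is a minimal numpy illustration, not the repository's own function):

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    """Quaternion conjugate: negate the vector part."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def dq_translation(q_r, q_d):
    """Translation of the rigid transform (q_r, q_d): vector part of 2 * q_d * conj(q_r)."""
    return 2.0 * quat_mul(q_d, quat_conj(q_r))[1:]

# Round trip: encode translation [1, 2, 3] with the identity rotation.
q_r = np.array([1.0, 0.0, 0.0, 0.0])
t = np.array([1.0, 2.0, 3.0])
q_d = 0.5 * quat_mul(np.array([0.0, *t]), q_r)  # dual part: 0.5 * t_quat * q_r
print(dq_translation(q_r, q_d))  # -> [1. 2. 3.]
```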

You might have to restructure the eval.py file so that you can use the functions from dualquats, by replacing `from code1.dualquats import currentdq2localquats` with `from dualquats import currentdq2localquats`. You will also have to define your own dataloader and network init functions (I can help you with that). In this codebase, we just provide the tools to convert between the representations.

luoww1992 commented 1 year ago

1. Can you share a json file and dataset with me? I want to edit eval.py against them, and it needs the libraries code1, rhythmdata, and rhythmnet.
2. Must the input bvh file have the same skeleton hierarchy as the target skeleton?
3. Can I use the SMPL 24-joint skeleton to retarget to the target skeleton? If yes, how do I edit the one-to-one skeleton index mapping or other code?


nefeliandreou commented 1 year ago
  1. You can acquire the dataset from here. The json file will depend on the dataloader and model initiator functions that you will define. I can share the ones I used with you, but you will have to modify them depending on your functions.
  2. Yes, input and target must be the same.
  3. Potentially yes. Please look into this.
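Retargeting from an SMPL 24-joint skeleton (point 3 above) would amount to remapping joint indices between the two hierarchies. A minimal sketch, with hypothetical, shortened joint-name lists (the real names and order must be read from the actual bvh hierarchy files):

```python
# Hypothetical joint-name lists for illustration only; the real SMPL and
# CMU joint names/order come from the model definition and the template bvh.
SMPL_JOINTS = ["Pelvis", "L_Hip", "R_Hip", "Spine1", "L_Knee", "R_Knee"]
CMU_JOINTS = ["Hips", "Spine", "LeftUpLeg", "LeftLeg", "RightUpLeg", "RightLeg"]

# Assumed one-to-one name correspondence; verify against your skeletons.
NAME_MAP = {
    "Pelvis": "Hips",
    "L_Hip": "LeftUpLeg",
    "R_Hip": "RightUpLeg",
    "Spine1": "Spine",
    "L_Knee": "LeftLeg",
    "R_Knee": "RightLeg",
}

def build_index_map(src_joints, dst_joints, name_map):
    """Return a list where entry i is the index in dst_joints of source joint i."""
    dst_index = {name: i for i, name in enumerate(dst_joints)}
    return [dst_index[name_map[name]] for name in src_joints]

index_map = build_index_map(SMPL_JOINTS, CMU_JOINTS, NAME_MAP)
print(index_map)  # -> [0, 2, 4, 1, 3, 5]
```

With such an index map, per-joint rotation arrays can be reordered with a single fancy-indexing step before feeding them to the target skeleton.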
luoww1992 commented 1 year ago

I have downloaded a dataset, but it contains only some bvh files. Can you give me a sample of your dataset together with the json and the other libraries? I am editing the code now...


nefeliandreou commented 1 year ago

My dataset is exactly the set of bvh files that I sent you. I will try to provide you with a json later today. The libraries that relate to model and data loading you will have to define yourself.

nefeliandreou commented 1 year ago

A sample config file looks like this:

```json
{
  "visdom_title": "v1-test",
  "model": "dq_acLSTM",
  "acRNN_model": "vanilla",
  "data_type": "dualquats",

  "paired_dir": ,
  "test_paired_dir": ,
  "validation_paired_dir": ,

  "order": "zyx",
  "fps_current": 120,
  "fps_target": 30,
  "QRL_local": false,
  "QRL_curr": false,
  "DL": true,
  "BL": false,
  "ortho6D_loss": false,
  "mse_loss": true,
  "push_seq": "86_06",

  "save_bvh_dir": ,
  "save_weights_dir": ,
  "train_read_weight_path": "",
  "bvh_skeleton_file_name": ,

  "save_weights_every": 10000,
  "save_bvh_every": 1000,
  "norm_type": "all",
  "foot_contact": false,

  "test_generate_seq_num": 3000,
  "test_save_bvh_dir": ,
  "test_read_weight_path": 
}
```

As I said before, this is specific to our network initialization function.
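Reading such a config in Python is straightforward. A hedged sketch (assuming the empty path fields in the sample have been filled in before parsing, since otherwise the file is not valid JSON; the sanity checks are illustrative, not part of the repository):

```python
import json

def load_config(path):
    """Load a training config file and apply a couple of illustrative sanity checks."""
    with open(path) as f:
        cfg = json.load(f)  # raises JSONDecodeError if path fields were left empty
    # Assumed invariants for illustration; adjust to the actual allowed values.
    assert cfg["fps_current"] % cfg["fps_target"] == 0, \
        "fps_current should be an integer multiple of fps_target"
    return cfg
```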