rhebbalaguppe opened this issue 1 year ago
Hi do you mean the second row of figure 1? I think that's the only example driven by animation from DeformingThings.
Hi, thanks for sharing the code and the trained models. The following points are unclear:
1. How to use the *_{pred_flow,src_vtx,tar_pts,tar_vtx}.npy files and evaluate/visualize_tracking.py for DeformingThings4D?
2. How to reproduce the qualitative results for:
- ModelsResources (Fig-4): We generated results on the test set, but the retargeted motion looks different from what is shown in the paper. I have attached two videos for reference. https://user-images.githubusercontent.com/22934809/210411204-ecb0d129-4daa-4cee-8292-6db02a52d079.mp4 https://user-images.githubusercontent.com/22934809/210411245-409bebab-173f-4f3e-8d9a-1c427de43374.mp4
- DeformingThings4D (Fig-1 bottom row): Can you provide the names of the files used from the dataset?
- DFaust + KillingFusion (Fig-7): How is mesh_simplification.obj created, and what timesteps were used for creating motion.npy? Or can you share the pre-processed files?
Hello,
1.1-1.2 For DeformingThings4D, we only use it to train the correspondence and deformation modules; we didn't evaluate the rigging and animation steps on it. You can optionally evaluate the deformation performance, i.e., scene-flow prediction, on it. To do this, as you already did, add pred_flow to src_vtx and compare the result with tar_vtx using MSE (a minimal sketch is below).
1.3 visualize_tracking.py is used to visualize the final animation after IK, so there is no need to use it here.
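For reference, a minimal sketch of that check (not the repository's evaluation code; the filenames and the (N, 3) array shapes are assumptions):

```python
# Minimal sketch: score predicted scene flow against ground truth.
# Assumes the *_pred_flow / *_src_vtx / *_tar_vtx .npy files hold (N, 3) float arrays
# for the same frame; the filenames below are placeholders.
import numpy as np

pred_flow = np.load("frame0000_pred_flow.npy")
src_vtx = np.load("frame0000_src_vtx.npy")
tar_vtx = np.load("frame0000_tar_vtx.npy")

warped = src_vtx + pred_flow                             # deform source vertices by predicted flow
mse = np.mean(np.sum((warped - tar_vtx) ** 2, axis=1))   # mean squared per-vertex error
print("scene-flow MSE:", mse)
```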
2.1 I think your videos look good; the predicted deformation seems more or less similar to the GT deformation. Do you mean the animation in our video demo? For the demo, we use motions from Mixamo because they look more natural. The motions you see here are our synthetic motions.
2.2 Fig-1 bottom row: Input mesh is goatS4J6Y, Motion is bucksYJL_GetHit2.
2.3 Similar to RigNet, we use Open3D 0.9.0 to get the simplified mesh as below:
mesh_simplify = mesh.simplify_quadric_decimation(5000)
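For completeness, a minimal runnable sketch around that call, assuming the Open3D 0.9.0 API and placeholder input/output paths:

```python
# Minimal sketch: decimate a mesh with Open3D 0.9.0 (paths are placeholders).
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("character.obj")        # hypothetical input mesh
mesh_simplify = mesh.simplify_quadric_decimation(5000)   # target ~5000 triangles
o3d.io.write_triangle_mesh("mesh_simplification.obj", mesh_simplify)
```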
3.1 We use pyrender to simulate partial scans and synthesize the point-cloud sequences. The number of points per frame is constrained to 1K. You can take a look at my unorganized headless-scan rendering script to get a sense of this process here (a rough sketch of the idea is included below). To process the meshes, similar to RigNet, we simply decimate them with "simplify_quadric_decimation" so that they have between 1K and 5K vertices.
3.2 The dataloader relies quite a bit on the pytorch-geometric library, especially its batching mechanism. You might want to look at that library as well as the scripts in the "datasets" folder.
3.3 If the shape of the reference character in the point cloud is different from the target mesh, you first need to align the mesh to the shape of the reference character. The checkpoint deform_s_mr is trained for this, and train_deform_shape.py is the script that trains it; you can use it to output the flow that aligns the two. If the shapes of the reference character and the target mesh are the same, you instead use deform_p_mr to get vertex trajectories first, and then use jointnet, masknet, skinnet, rootnet, and bonenet to get the rig. The steps to get the rig are similar to RigNet. I will upload a demo script with those steps.
3.4 We use Open3D to visualize. There are some visualization scripts in the evaluate folder for some of the steps, and some visualization functions are in utils/vis_utils.py.
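To give a sense of the partial-scan rendering in 3.1 without the actual script, here is a rough sketch of rendering one frame with pyrender and back-projecting the depth map into a ~1K-point partial scan. The image size, intrinsics, camera pose, and mesh path are placeholder assumptions, not values from the paper:

```python
# Rough sketch: simulate one partial-scan frame with pyrender (headless).
import os
os.environ["PYOPENGL_PLATFORM"] = "egl"   # offscreen/headless rendering

import numpy as np
import trimesh
import pyrender

W, H, F = 320, 240, 300.0                          # image size and focal length (assumed)
tri = trimesh.load("character.obj", force="mesh")  # hypothetical mesh, roughly centered at the origin
scene = pyrender.Scene()
scene.add(pyrender.Mesh.from_trimesh(tri))

cam = pyrender.IntrinsicsCamera(fx=F, fy=F, cx=W / 2, cy=H / 2)
cam_pose = np.eye(4)
cam_pose[2, 3] = 2.5                               # camera at z = 2.5, looking toward the origin (assumed)
scene.add(cam, pose=cam_pose)

renderer = pyrender.OffscreenRenderer(W, H)
depth = renderer.render(scene, flags=pyrender.RenderFlags.DEPTH_ONLY)
renderer.delete()

# Back-project valid depth pixels into camera space (pyrender cameras look down -z).
v, u = np.nonzero(depth > 0)
z = depth[v, u]
x = (u - W / 2) * z / F
y = (H / 2 - v) * z / F
pts_cam = np.stack([x, y, -z], axis=1)

# Move to world coordinates and subsample to 1K points for this frame.
pts = pts_cam @ cam_pose[:3, :3].T + cam_pose[:3, 3]
idx = np.random.choice(len(pts), size=min(1000, len(pts)), replace=False)
partial_scan = pts[idx]
```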
Hi, thanks for the prompt reply.
Yes, Mixamo has automatic motion retargeting, but for humanoids only. That's why we mostly show humanoid animations in the demo. You can try "upload character" on the Mixamo website.
BTW, I think there is a typo in the command:
python -u training/train_rig.py \
  --arch="jointnet_motion" \
  --train_folder="$DATASET_PATH/ModelsReources/train/" \
  --val_folder="DATASET_PATH/ModelsReources/val/" \
  --test_folder="DATASET_PATH/ModelsReources/test/" \
  --train_batch=4 --test_batch=4 \
  --logdir="logs/jointnet_motion" \
  --checkpoint="checkpoints/jointnet_motion" \
  --lr=5e-4 --schedule 40 80 --epochs=120
Shouldn't the folder name be ModelsResources?
Thank you. Corrected.
Thank you for providing the paper and the code for training. I would like to inquire about the preprocessing of .ply files to extract the necessary data for rigging and tracking.
Upon reviewing the code, I noticed that it utilizes the _vtx_traj.npy and _pts_traj.npy files as sources for trajectories. I am interested in reproducing the results using real data on a prepared mesh.
It would be greatly appreciated if you could share your code for generating the complete dataset. Thank you for your assistance.
Can you share a demo script to evaluate on the DeformingThings4D dataset? Or can you explain the changes that need to be made to the commands or files?
Also, can you share the filenames from the ModelsResources dataset that are shown in the main paper?