Closed — merlinyx closed this issue 1 year ago.
I have a few other questions related to preprocessing:

1. What should the `.npy` files in the `parsing_SCH_ATR` folder contain? I can generate the PNGs without issues, but I wasn't sure what the `.npy` files should hold. They are required by `dataset.py` and some other processing files like `parsing_mask_to_fl.py`.
2. How are the `featurelines/*.json` files generated? I could not find any information about them. They seem to be related to the files generated in `mask2fl/` by `parsing_mask_to_fl.py`, but they are also required by that Python script.
3. What is `./female_large_pose/$1/joints2d/2D_pose.json` in `resize_video_imgs.sh`? I tried putting in the pose.json output I got from lightweight-human-pose-estimation.pytorch, but it seems to expect a dictionary that has body parts as keys. Could you point me to an example format of this file?
4. Can we run without `smpl_rec.npz` and use just `tcmr_output.pkl`? I was able to run TCMR without trouble for pose sequence estimation.

Thanks!
Yes, videoavatars is built on Python 2.7 using a conda environment. My email is 220019047@link.cuhk.edu.cn.
A1: I parse the human semantic masks with self-correction-human-parsing, then save the semantic category labels into `*.npy` files.
A2: We generate `featurelines/*.json` using a garment landmark estimator trained on DeepFashion2.
A3: `./female_large_pose/$1/joints2d/2D_pose.json` is obtained with the OpenPose library.
A4: Of course, you can run REC-MV without `smpl_rec.npz`. However, `tcmr_output.pkl` is sometimes unstable compared to `smpl_rec.npz`, so you need to add a smoothing function to refine it.
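Since A3 says the 2D poses come from OpenPose: for reference, OpenPose's standard per-frame JSON layout looks roughly like the sketch below (field names `people` and `pose_keypoints_2d` are OpenPose's own, with flat `[x, y, confidence]` triples for the 25 BODY_25 joints). Whether REC-MV consumes this schema directly or repackages it into `2D_pose.json` is an assumption, not confirmed by the repo.

```python
import json
import numpy as np

# A synthetic frame standing in for real OpenPose output (BODY_25 model:
# 25 joints, each stored as a flat x, y, confidence triple).
frame = {
    "version": 1.3,
    "people": [
        {"pose_keypoints_2d": [0.0] * 75}  # 25 joints * (x, y, confidence)
    ],
}
with open("2D_pose_frame.json", "w") as f:
    json.dump(frame, f)

def load_pose(path):
    """Read one OpenPose frame and return the first person's joints as (25, 3)."""
    with open(path) as f:
        data = json.load(f)
    kps = data["people"][0]["pose_keypoints_2d"]
    return np.asarray(kps, dtype=np.float32).reshape(-1, 3)

joints = load_pose("2D_pose_frame.json")
print(joints.shape)  # (25, 3)
```

If your estimator (e.g. lightweight-human-pose-estimation.pytorch) emits a different keypoint order or count, you would need to remap it to whatever REC-MV expects.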
I see, that makes sense! I'll reach out with some follow-ups, but these answers are very helpful!
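For anyone following along, here is a minimal sketch of the A1 step (writing parsing labels to `.npy`). The file names, and the assumption that the PNG's pixel values are already the semantic category IDs (0 = background), are mine, not from the repo.

```python
import numpy as np
from PIL import Image

def parsing_png_to_npy(png_path, npy_path):
    """Convert a parsing PNG into the per-pixel (H, W) uint8 label map
    that dataset.py presumably reads (a sketch, not the authors' script)."""
    labels = np.array(Image.open(png_path).convert("L"), dtype=np.uint8)
    np.save(npy_path, labels)
    return labels

# Tiny synthetic mask standing in for a real self-correction-human-parsing frame.
Image.fromarray(np.array([[0, 5], [5, 0]], dtype=np.uint8)).save("demo_parsing.png")
labels = parsing_png_to_npy("demo_parsing.png", "demo_parsing.npy")
print(labels.shape)  # (2, 2)
```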
Hi, thanks to the authors for the excellent work and the timely open-sourcing! I have some questions. Q1: According to Hugging Face, is there a missing script to generate FeatureLines and Mask2FL? Q2: For long sleeves, short sleeves, long pants, and shorts, do we also need to initialize the voxel skinning weights? Q3: In the CAPE dataset, the large poses only appear on the front of the subject; can REC-MV handle this situation?
@xz-pisces For Q1, I reached out about the feature lines, and indeed the script hasn't been released. The author told me that the feature line predictor can be trained using https://github.com/Mamba-ZJP/deepfashion2-kps-agg-81 (which I did not try, because the author also said he was planning to release a pretrained version).
Hi, any updates on Q1?
I tried deepfashion2-kps-agg-81, but the pretrained model isn't even saved properly; it gives errors while loading: https://github.com/GAP-LAB-CUHK-SZ/REC-MV/issues/8
@merlinyx Hi, have you gotten the pretrained version? Thanks!
Hi! I wanted to follow up on the preprocessing. I tried following the steps detailed in the README.md and most of them went through fine, but I'm stuck at videoavatars' step1_pose.py. I think chumpy might be incompatible with Python 3, which caused strange errors that I cannot fix based on the error messages alone. I tried using Python 2.7, but the environment setup itself ran into errors. Could you let me know how you set up the environment for running videoavatars to generate the pose sequence? Did you have to modify any chumpy source code? Thank you! (The videoavatars repo doesn't seem to allow posting issues.)
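For what it's worth, the route suggested earlier in this thread is an isolated Python 2.7 conda environment rather than patching chumpy for Python 3. A sketch of that setup is below; the package list and version pin are my guesses, not a recipe tested against videoavatars:

```shell
# Sketch of a Python 2.7 environment for videoavatars (packages are assumptions).
conda create -n videoavatars python=2.7 -y
conda activate videoavatars
# Older numpy still exposes the deprecated aliases chumpy imports.
pip install "numpy<1.17" chumpy opencv-python h5py
```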
By the way, @lingtengqiu, could you let me know your email address? I think I may have some further questions that would be easier to discuss over email! (The one from your homepage bounces back for me.) Thanks!