zhengyuf / IMavatar

Official repository for CVPR 2022 paper: I M Avatar: Implicit Morphable Head Avatars from Videos
MIT License

How to create a video like the example #26

Closed carlosedubarreto closed 2 years ago

carlosedubarreto commented 2 years ago

I'm loving your work. It's amazing.

I was able to create the .ply files, but noticed that it only seems to create a mesh once every 200 frames. Is there a setting I can change, or something else I can do, to create a frame-by-frame mesh so I can get an animation?

And how do you get the textured surface? (That's not my main purpose, but if it's not complicated for you to explain, I would love to know that too.)

Thanks a lot for your time and hard work.

carlosedubarreto commented 2 years ago

Found it. Just change the subsample parameter to 1 in the config file.

Here is an example from the config I'm using:


    test{
        sub_dir = [MOV_001]
        # img_res = [256, 256]
        img_res = [192, 192]
        # subsample=  200
        subsample=  1
    }
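
With subsample = 1 the evaluation writes one .ply per frame. To view the sequence as an animation I bring the meshes back into Blender (which I already use for cropping). This is only a rough sketch, assuming Blender 3.x (bpy.ops.import_mesh.ply) and a placeholder output folder, so adjust the path to wherever your .ply files end up:

    # Rough sketch: import the per-frame .ply meshes and keyframe their
    # visibility so that each mesh is shown only on its own frame.
    # The folder below is a placeholder, not the real IMavatar output path.
    import bpy, glob

    ply_files = sorted(glob.glob(r"D:\MOCAP\IMavatar\output\*.ply"))
    # note: plain sorted() is lexicographic; adjust the key if your files
    # are numbered without zero-padding (1.ply, 2.ply, ..., 10.ply)

    for frame, path in enumerate(ply_files, start=1):
        bpy.ops.import_mesh.ply(filepath=path)
        obj = bpy.context.selected_objects[0]
        # hide the mesh on the neighbouring frames, show it on its own
        for f, hidden in ((frame - 1, True), (frame, False), (frame + 1, True)):
            obj.hide_viewport = hidden
            obj.hide_render = hidden
            obj.keyframe_insert(data_path="hide_viewport", frame=f)
            obj.keyframe_insert(data_path="hide_render", frame=f)
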
RAJA-PARIKSHAT commented 2 years ago

@carlosedubarreto how are you creating a frame-by-frame 3D mesh? Can you describe it in detail?

carlosedubarreto commented 2 years ago

Hello @RAJA-PARIKSHAT, tough question. It's been about a month since I last used it, so I don't remember everything from memory, but I took some notes. Hope they help.

There is just one problem: most of the notes were written in Portuguese (my native language), so here they are translated to English.

And I tend to put the important notes on top; the bottom ones were probably where I started testing.


-- after cropping the video down to just the face in Blender, at 512x512
go into the folder

D:\MOCAP\IMavatar\IMavatar\preprocess\submodules\MODNet

run
python -m demo.video_matting.custom.run --video D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001\0001-0413.mp4 --result-type matte --fps 25

then, export it frame by frame

  echo $video_folder/$subject_name/"${array[0]}"/"image"
  mkdir -p $video_folder/$subject_name/"${array[0]}"/"image"
  ffmpeg -i $video_folder/"${array[0]}_cropped.mp4" -q:v 2 $video_folder/$subject_name/"${array[0]}"/"image"/"%d.png"
  mkdir -p $video_folder/$subject_name/"${array[0]}"/"mask"
  ffmpeg -i $video_folder/"${array[0]}_cropped_matte.mp4" -q:v 2 $video_folder/$subject_name/"${array[0]}"/"mask"/"%d.png"

-- export the original video's frames
ffmpeg -i D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001\0001-0413.mp4 -q:v 2 D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001\image\%d.png

-- export the mask frames
ffmpeg -i D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001\0001-0413_matte.mp4 -q:v 2 D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001\mask\%d.png

---- Run DECA (FLAME)
cd \MOCAP\IMavatar\IMavatar\preprocess\submodules\DECA

create a deca folder
I had to run
pip install kornia

original command
python demos/demo_reconstruct.py -i $video_folder/$subject_name/"${array[0]}"/image --savefolder $video_folder/$subject_name/"${array[0]}"/"deca" --saveCode True --saveVis False --sample_step 1  --render_orig False

use the pytorch3d rasterizer, otherwise you'll get a compilation error
python demos/demo_reconstruct.py -i D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001\image --savefolder D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\MOV_001\deca --saveCode True --saveVis False --sample_step 1  --render_orig False --rasterizer_type=pytorch3d

= a version I liked
python demos/demo_reconstruct.py -i D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001\image --savefolder D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\MOV_001\deca --saveCode True --saveVis True --sample_step 1  --render_orig False --rasterizer_type=pytorch3d --saveObj D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001\deca\obj

go to the preprocess folder
cd \MOCAP\IMavatar\IMavatar\preprocess

original
python keypoint_detector.py --path $video_folder/$subject_name/"${array[0]}"

python keypoint_detector.py --path D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001

run iris segmentation
python iris.py --path $video_folder/$subject_name/"${array}"

python iris.py --path D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001

go back to the DECA folder
cd \MOCAP\IMavatar\IMavatar\preprocess\submodules\DECA

pip install torchfile

python optimize.py --path $video_folder/$subject_name/"${array}" --cx $cx --cy $cy --fx $fx --fy $fy --size $resize

python optimize.py --path D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001 --cx 261.442628 --cy 253.231895 --fx 1539.67462 --fy 1508.93280 --size 512

#### if you don't have the camera calibration, run it without that info, since it falls back to the defaults
python optimize.py --path D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001

######## Go into the face-parsing.PyTorch folder
cd \MOCAP\IMavatar\IMavatar\preprocess\submodules\face-parsing.PyTorch

###original
python test.py --dspth $video_folder/$subject_name/"${array}"/image --respth $video_folder/$subject_name/"${array}"/semantic

python test.py --dspth D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001\image --respth D:\MOCAP\IMavatar\IMavatar\data\datasets\carlos\carlos\MOV_001\semantic

###Training
go into the folder
cd \MOCAP\IMavatar\IMavatar\code

python scripts/exp_runner.py --conf ./confs/IMavatar_supervised.conf [--wandb_workspace IMavatar] [--is_continue]

# what I used
python scripts/exp_runner.py --conf ./confs/IMavatar_supervised_carlos.conf
### continuing (resuming training)
python scripts/exp_runner.py --conf ./confs/IMavatar_supervised_carlos.conf --is_continue

###Evaluation
python scripts/exp_runner.py --conf ./confs/IMavatar_supervised.conf --is_eval

# what I used
python scripts/exp_runner.py --conf ./confs/IMavatar_supervised_carlos.conf --is_eval
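
To finally get a video like the example, I pack the frames rendered by the evaluation back into an mp4. A minimal sketch, assuming the renders are numbered PNGs (the folder below is a placeholder for wherever exp_runner.py writes them) and that imageio plus imageio-ffmpeg are installed:

    # Minimal sketch: stitch numbered PNG frames into an mp4 at 25 fps,
    # matching the fps used during preprocessing. Placeholder input folder.
    import glob, os
    import imageio.v2 as imageio

    frames = sorted(glob.glob(r"D:\MOCAP\IMavatar\eval_output\*.png"),
                    key=lambda p: int(os.path.splitext(os.path.basename(p))[0]))

    with imageio.get_writer("imavatar_result.mp4", fps=25) as writer:
        for path in frames:
            writer.append_data(imageio.imread(path))

You could just as well reuse ffmpeg for this, like in the frame-extraction step above, e.g. with -framerate 25 and the %d.png input pattern.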