neeek2303 / EMOPortraits

Official implementation of EMOPortraits: Emotion-enhanced Multimodal One-shot Head Avatars

missing training code / class data_dict['idt_embed_face_target'] = idt_embed_cycle_face (only exists in ipynb_checkpoints) #24

Open johndpope opened 2 weeks ago

johndpope commented 2 weeks ago

https://github.com/neeek2303/EMOPortraits/blob/93cd6a565ca611e13fe75fd98ef198a737fba9c4/models/.ipynb_checkpoints/volumetric_avatar-checkpoint.py#L1076
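For context, this is the assignment the title refers to, at line 1076 of the checkpoint copy; only the line itself is quoted from the repo, the comment is my reading of the issue title:

# models/.ipynb_checkpoints/volumetric_avatar-checkpoint.py, L1076
# (per the issue title, this line is absent from the non-checkpoint copy)
data_dict['idt_embed_face_target'] = idt_embed_cycle_face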

UPDATE

losses/__init__.py needs this import:

from .warping_regularizer import WarpReg

Without it, init_losses falls over with the AttributeError below; after adding it, the run gets further but still dies on a missing file, keys_best.pkl (full command and log follow the traceback).

  File "/media/oem/12TB/EMOPortraits/models/stage_1/volumetric_avatar/va.py", line 126, in __init__
    self.init_losses(args)
  File "/media/oem/12TB/EMOPortraits/models/stage_1/volumetric_avatar/va.py", line 286, in init_losses
    return init_losses(self, args)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/media/oem/12TB/EMOPortraits/models/stage_1/volumetric_avatar/va_losses_and_visuals.py", line 785, in init_losses
    obj.warp_reg_loss = losses.WarpReg(args)
                        ^^^^^^^^^^^^^^
AttributeError: module 'losses' has no attribute 'WarpReg'
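For anyone hitting the same thing, a minimal sketch of the losses/__init__.py change, assuming warping_regularizer.py is present in losses/ and defines WarpReg (that is what va_losses_and_visuals.py looks up on the losses namespace):

# losses/__init__.py -- sketch of the missing re-export only;
# the existing imports in this file stay untouched.
from .warping_regularizer import WarpReg  # noqa: F401

A quick sanity check before relaunching train.py is python -c "import losses; print(losses.WarpReg)" from the repo root.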
(comfyui) ➜  EMOPortraits git:(feat/take2) ✗ python train.py --experiment_name Retrain_with_17_V1_New_rand_MM_SEC_4_drop_02_stm_10_CV_05_1_1 --dataset_name voxceleb2hq_pairs --dataset_name_test voxceleb2hq_pairs --num_gpus 1 --batch_size 2 --max_epochs 400 --image_size 512 --aug_warp_size 512 --vgg19_num_scales 4 --dis_num_scales 2 --gen_shd_max_iters 400000 --dis_shd_max_iters 400000 --logging_freq 10 --visuals_freq 200 --vgg19_weight 18 --gaze_weight 10 --vgg19_face 10 --perc_face_pars 0 --face_resnet 0 --feature_matching_weight 40 --resnet18_fv_mix 35 --vgg19_fv_mix 0.0 --norm_layer_type gn --use_seg True --pull_exp 1 --push_exp 1 --stm 10 --contrastive_exp 2 --contrastive_idt 0.0 --test_batch_size 4 --train_epoch_len 15000 --test_epoch_len 2000 --dis_num_blocks 4 --gen_opt_type adamw --dis_opt_type adamw --dis_beta1 0.5 --gen_beta1 0.5 --lpe_face_backbone resnet18 --dec_pred_seg False --use_back False --use_stylegan_d False --dis_stylegan_lr 0.0002 --use_ws True --separate_idt False --r1 2.0 --mix_losses_start 1 --contr_losses_start 1 --stylegan_weight 1.0 --use_masked_aug False --num_b_negs 1 --dec_max_channels 512 --dec_channel_mult 2 --enc_channel_mult 4 --gen_dummy_input_size 8 --gen_latent_texture_channels 96 --latent_volume_channels 96 --source_volume_num_blocks 3 --custom_test True --augment_geometric_train False --random_theta True --green True --old_mix_pose False --use_mix_mask True --w_eyes_loss_l1 500 --w_mouth_loss_l1 500 --w_ears_loss_l1 500 --normalize_losses True --use_tensor False --use_amp False --lpe_output_channels_expression 128 --use_ibug_mask False --checkpoint_freq 10 --print_norms True --print_model False --im_dec_num_lrs_per_resolution 2 --im_dec_ch_div_factor 1.5 --dec_num_blocks 6 --dec_use_adanorm False --emb_v_exp False --save_exp_vectors True --dec_no_detach_frec 1 --sec_dataset_every 4 --predict_target_canon_vol True --volumes_l1 0.5 --vol_loss_epoch 1 --vol_loss_grad 1 --dec_key_emb orig_d --detach_lat_vol -1 --aug_color_coef 10 --exp_dropout 0.2 --separate_stm True --bs_resnet18_fv_mix 2 --use_sec_dataset True
/media/oem/12TB/EMOPortraits/logs/Retrain_with_17_V1_New_rand_MM_SEC_4_drop_02_stm_10_CV_05_1_1/expression_vectors
Hybrid stages [True, True, True]
Loading checkpoint from: /media/oem/12TB/face_parsing/ibug/face_parsing/rtnet/weights/rtnet50-fcn-14.torch
bbbbbbbbbbb
29110272 6707587
Hybrid stages [True, True, True]
Loading checkpoint from: /media/oem/12TB/face_parsing/ibug/face_parsing/rtnet/weights/rtnet50-fcn-14.torch
SN applied to local_encoder_nw
SN applied to idt_embedder_nw
SN applied to expression_embedder_nw
SN applied to xy_generator_nw
SN applied to uv_generator_nw
SN applied to warp_embed_head_orig_nw
SN applied to volume_process_nw
SN applied to volume_source_nw
SN applied to decoder_nw
WS applied to local_encoder_nw
WS applied to idt_embedder_nw
WS applied to expression_embedder_nw
WS applied to xy_generator_nw
WS applied to uv_generator_nw
WS applied to warp_embed_head_orig_nw
WS applied to volume_process_nw
WS applied to volume_source_nw
WS applied to decoder_nw
Setting up [LPIPS] perceptual loss: trunk [alex], v[0.1], spatial [off]
Loading model from: /home/oem/miniconda3/envs/comfyui/lib/python3.11/site-packages/lpips/weights/v0.1/alex.pth
Traceback (most recent call last):
  File "/media/oem/12TB/EMOPortraits/train.py", line 528, in <module>
    main(args)
  File "/media/oem/12TB/EMOPortraits/train.py", line 459, in main
    trainer = Trainer(args)
              ^^^^^^^^^^^^^
  File "/media/oem/12TB/EMOPortraits/train.py", line 102, in __init__
    data_module = importlib.import_module(f'datasets.{args.dataset_name_test}').DataModule(args)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/media/oem/12TB/EMOPortraits/datasets/voxceleb2hq_pairs.py", line 599, in __init__
    keys_i = pickle.load(open(f'{self.data_root}/{i}_lmdb/keys_best.pkl', 'rb'))
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/fsx/VC2_HD_f/0_lmdb/keys_best.pkl'
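keys_best.pkl is not shipped with the repo, and the data root /fsx/VC2_HD_f looks like a path from the authors' cluster. As a stopgap, here is a rough sketch for generating the file from an already-built LMDB shard; it assumes keys_best.pkl is just a pickled list of the LMDB record keys (the exact structure voxceleb2hq_pairs.py expects is not documented, so treat this as a guess):

# build_keys_best.py -- hypothetical helper, not part of the repo.
# Assumes keys_best.pkl is a pickled list of LMDB record keys; adjust if
# datasets/voxceleb2hq_pairs.py turns out to expect a different structure.
import pickle

import lmdb


def dump_keys(lmdb_dir, out_path):
    # Walk every key in the shard and pickle them as a plain list.
    # Keys may need to stay as bytes, depending on how the dataset indexes them.
    env = lmdb.open(lmdb_dir, readonly=True, lock=False, readahead=False)
    with env.begin(write=False) as txn:
        keys = [key.decode() for key, _ in txn.cursor()]
    env.close()
    with open(out_path, 'wb') as f:
        pickle.dump(keys, f)


if __name__ == '__main__':
    root = '/fsx/VC2_HD_f'  # data_root from the traceback; change to your own
    shard = 0               # the DataModule loops over {i}_lmdb shards
    dump_keys(f'{root}/{shard}_lmdb', f'{root}/{shard}_lmdb/keys_best.pkl')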