iPERDance / iPERCore

Liquid Warping GAN with Attention: A Unified Framework for Human Image Synthesis
https://iperdance.github.io/work/impersonator-plus-plus.html
Apache License 2.0

WHERE IS THE HERO #155

Closed · mioyeah closed this issue 1 year ago

mioyeah commented 1 year ago

F:\ProgramData\Anaconda3\envs\iPERDance\python.exe D:/pycharmProject/iPERCore/train/dist_train.py
100%|██████████| 5/5 [00:01<00:00, 3.08it/s]
Dataset VideoDataset was created.
100%|██████████| 2/2 [00:00<00:00, 3.46it/s]
Dataset VideoDataset was created.
Network AttLWB-SPADE was created
Network patch_global_body_head was created
Loading vgg19 from ../assets/checkpoints/losses/vgg19-dcbb9e9d.pth...
Loading face model from ../assets/checkpoints/losses/sphere20a_20171020.pth

train video clips = 5506

test video clips = 2012

{'MAX_NUM_SOURCE': 8, 'NUMBER_FACES': 13776, 'NUMBER_VERTS': 6890, 'Train': {'D_adam_b1': 0.9, 'D_adam_b2': 0.999, 'G_adam_b1': 0.9, 'G_adam_b2': 0.999, 'aug_bg': True, 'display_freq_s': 600, 'face_factor': 1.0, 'face_loss_path': '../assets/checkpoints/losses/sphere20a_20171020.pth', 'final_lr': 1e-05, 'lambda_D_prob': 1.0, 'lambda_face': 5.0, 'lambda_mask': 5.0, 'lambda_mask_smooth': 1.0, 'lambda_rec': 10.0, 'lambda_tsf': 10.0, 'lr_D': 0.0001, 'lr_G': 0.0001, 'niters_or_epochs_decay': 0, 'niters_or_epochs_no_decay': 400000, 'num_iters_validate': 1, 'opti': 'Adam', 'print_freq_s': 180, 'save_latest_freq_s': 7200, 'tb_visual': True, 'train_G_every_n_iterations': 1, 'use_face': True, 'use_vgg': 'VGG19', 'vgg_loss_path': '../assets/checkpoints/losses/vgg19-dcbb9e9d.pth'}, 'background_dir': '/p300/tpami/places', 'batch_size': 1, 'bg_ks': 11, 'cfg_path': '../assets/configs/trainers/train_aug_bg.toml', 'conf_erode_ks': 3, 'dataset_dirs': ['../scripts/train/datasets_reproduce/iPER'], 'dataset_mode': 'ProcessedVideo', 'digital_type': 'cloth_smpl_link', 'dis_name': 'patch_global_body_head', 'face_path': '../assets/checkpoints/pose3d/smpl_faces.npy', 'facial_path': '../assets/configs/pose3d/front_facial.json', 'fim_enc_path': '../assets/configs/pose3d/mapper_fim_enc.txt', 'front_path': '../assets/configs/pose3d/front_body.json', 'ft_ks': 1, 'gen_name': 'AttLWB-SPADE', 'gpu_ids': ['0'], 'head_path': '../assets/configs/pose3d/head.json', 'image_size': 512, 'intervals': 1, 'ip': '', 'is_train': True, 'load_iter': 0, 'load_path_D': 'None', 'load_path_G': 'None', 'local_rank': 0, 'map_name': 'uv_seg', 'meta_data': {'checkpoints_dir': './output_dir\models\AttLWB_iPER', 'opt_path': './output_dir\models\AttLWB_iPER\opts.txt', 'personalized_ckpt_path': './output_dir\models\AttLWB_iPER\personalized.pth', 'root_primitives_dir': './output_dir\primitives'}, 'model_id': 'AttLWB_iPER', 'neural_render_cfg': {'Discriminator': {'bg_cond_nc': 4, 'cond_nc': 6, 'max_nf_mult': 8, 'n_layers': 4, 'name': 'patch_global', 'ndf': 64, 'norm_type': 'instance', 'use_sigmoid': False}, 'Generator': {'BGNet': {'cond_nc': 4, 'n_res_block': 6, 'norm_type': 'instance', 'num_filters': [64, 128, 128, 256]}, 'SIDNet': {'cond_nc': 6, 'n_res_block': 6, 'norm_type': 'None', 'num_filters': [64, 128, 256]}, 'TSFNet': {'cond_nc': 6, 'n_res_block': 6, 'norm_type': 'instance', 'num_filters': [64, 128, 256]}, 'name': 'AttLWB-SPADE'}}, 'neural_render_cfg_path': '../assets/configs/neural_renders/AttLWB-SPADE.toml', 'num_source': 4, 'num_workers': 1, 'only_vis': False, 'out_dilate_ks': 9, 'output_dir': './output_dir', 'part_path': '../assets/configs/pose3d/smpl_part_info.json', 'port': 0, 'serial_batches': False, 'share_bg': True, 'smpl_model': '../assets/checkpoints/pose3d/smpl_model.pkl', 'smpl_model_hand': '../assets/checkpoints/pose3d/smpl_model_with_hand_v2.pkl', 'tb_visual': True, 'temporal': False, 'tex_size': 3, 'time_step': 2, 'train_name': 'LWGAugBGTrainer', 'use_cudnn': False, 'use_inpaintor': False, 'uv_map_path': '../assets/configs/pose3d/mapper_uv.txt', 'verbose': False}

Traceback (most recent call last):
  File "F:\ProgramData\Anaconda3\envs\iPERDance\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "F:\ProgramData\Anaconda3\envs\iPERDance\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "D:\pycharmProject\iPERCore\iPERCore\services\train.py", line 252, in <module>
    Train(cfg)
  File "D:\pycharmProject\iPERCore\iPERCore\services\train.py", line 36, in __init__
    self._train()
  File "D:\pycharmProject\iPERCore\iPERCore\services\train.py", line 162, in _train
    self._model.set_input(train_batch, self._device)
  File "D:\pycharmProject\iPERCore\iPERCore\tools\trainers\lwg_trainer.py", line 402, in set_input
    aug_bg = inputs["bg"].to(device, non_blocking=True)
KeyError: 'bg'

Traceback (most recent call last):
  File "F:\ProgramData\Anaconda3\envs\iPERDance\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "F:\ProgramData\Anaconda3\envs\iPERDance\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "F:\ProgramData\Anaconda3\envs\iPERDance\lib\site-packages\torch\distributed\launch.py", line 261, in <module>
    main()
  File "F:\ProgramData\Anaconda3\envs\iPERDance\lib\site-packages\torch\distributed\launch.py", line 257, in main
    cmd=cmd)
subprocess.CalledProcessError: Command '['F:\ProgramData\Anaconda3\envs\iPERDance\python.exe', '-u', '-m', 'iPERCore.services.train', '--local_rank=0', '--gpu_ids', '0', '--dataset_dirs', '../scripts/train/datasets_reproduce/iPER', '--background_dir', '/p300/tpami/places', '--dataset_mode', 'ProcessedVideo', '--cfg_path', '../assets/configs/trainers/train_aug_bg.toml']' returned non-zero exit status 1.

Process finished with exit code 0

Can I train on Windows?
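
The failure in the log is not obviously a Windows problem in itself. `set_input` in `iPERCore/tools/trainers/lwg_trainer.py` looks up `inputs["bg"]`, and the training batch simply has no `"bg"` entry. Given `aug_bg = True` and `train_name = 'LWGAugBGTrainer'` in the options dump, the dataloader presumably attaches a background image under `"bg"` only when it can sample from `--background_dir`; here that is `/p300/tpami/places`, a Linux-style path that does not exist on a Windows machine, so the key is likely never filled in. A minimal pre-flight check along the following lines (a sketch, not part of iPERCore; the Windows path is hypothetical) can confirm that the background directory is usable before launching `dist_train.py`:

```python
# Sketch: check that --background_dir actually contains images before training
# with ../assets/configs/trainers/train_aug_bg.toml. The directory below is a
# hypothetical Windows path; point it at your local background/Places images.
import glob
import os

background_dir = r"D:\datasets\places"  # hypothetical; pass the same value via --background_dir

patterns = ("*.jpg", "*.jpeg", "*.png")
images = [
    path
    for pattern in patterns
    for path in glob.glob(os.path.join(background_dir, "**", pattern), recursive=True)
]

print(f"Found {len(images)} background images under {background_dir}")
if not images:
    raise SystemExit(
        "No background images found. With background augmentation enabled, the "
        "training batch will likely lack the 'bg' entry and fail with KeyError: 'bg'."
    )
```

If no local background images are available, a trainer config that leaves `aug_bg` disabled (assuming the repository provides one) should avoid the `inputs["bg"]` lookup entirely.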

mioyeah commented 1 year ago

Is anyone willing to share the preprocessed datasets?

mioyeah commented 1 year ago

aug_bg = inputs["bg"].to(device, non_blocking=True) — can someone tell me what 'bg' means here?
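
Judging from the options dump above (`aug_bg: True`, `background_dir: '/p300/tpami/places'`, `train_name: 'LWGAugBGTrainer'`), `"bg"` is most likely the augmented background image that the dataset attaches to each training batch when background augmentation is enabled; if no background can be sampled, the later `inputs["bg"]` lookup raises exactly the `KeyError: 'bg'` shown in the log. A toy illustration of the pattern (not iPERCore code, just a sketch of how such a key normally enters a batch):

```python
# Toy example (not iPERCore code): a dataset that only adds a "bg" entry when
# background augmentation is enabled and a background image can be sampled.
import torch
from torch.utils.data import Dataset, DataLoader

class ToyAugBGDataset(Dataset):
    def __init__(self, num_samples=4, aug_bg=True):
        self.num_samples = num_samples
        self.aug_bg = aug_bg

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        sample = {"image": torch.rand(3, 512, 512)}
        if self.aug_bg:
            # Stand-in for a background image sampled from --background_dir.
            sample["bg"] = torch.rand(3, 512, 512)
        return sample

loader = DataLoader(ToyAugBGDataset(aug_bg=False), batch_size=2)
batch = next(iter(loader))
print("bg" in batch)  # False -> a later batch["bg"] lookup raises KeyError: 'bg'
```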

mioyeah commented 1 year ago

Did anyone succeed in the training?