Open · JhEdLp opened this issue 12 months ago
Hello @JhEdLp, hope you're doing great.
Did you manage to run this repo? Can it be run fully? I haven't given it a try yet, so I was just asking whether you did. Thanks in advance for the response.
Hi @jasuriy,
Yes, I ran this repo, but only for inference, over a custom image set. If you run into any trouble, let me know, and I'll help if I can.
Hi @JhEdLp, I would like to ask: how did you prepare your custom dataset?
Hello @JhEdLp, do I need to generate latent codes first when using other datasets? Should I use the e4e model to generate them? The latent codes generated by e4e have a dimension of 18, which conflicts with the password dimension of 14. I'm not sure what's going on.
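For reference, here is a minimal sketch of where the 18 vs. 14 mismatch probably comes from (the file name is a placeholder, and the interpretation is my assumption, not something confirmed in this thread): the number of W+ style layers depends on the generator's output resolution.

```python
import math
import torch

# Number of W+ style layers for a StyleGAN2 generator of a given output
# resolution is 2 * log2(resolution) - 2: 18 for 1024px, 14 for 256px.
def num_wplus_layers(resolution: int) -> int:
    return 2 * int(math.log2(resolution)) - 2

print(num_wplus_layers(1024))  # 18 -> e4e encoder trained for 1024px FFHQ
print(num_wplus_layers(256))   # 14 -> matches a 14-layer password space

# Hypothetical shape check on saved latents (file name is a placeholder):
latents = torch.load("latents.pt")  # e.g. (N, 18, 512) from the 1024px e4e
print(latents.shape)
```

If that is indeed the cause, re-encoding with an e4e trained on a 256px StyleGAN2 (so the codes come out as (N, 14, 512)) seems safer than slicing `latents[:, :14]`, which silently drops the four finest style layers; but that is a guess on my part, not the authors' stated procedure.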
Hi @JhEdLp, when I try to run inference I get this error: Segmentation fault (core dumped).
Could you please help with it? Thank you in advance.
python coach_test.py
/home/jasurbek/Documents/projects/RiDDLE
loading e4e pretrained_models/e4e_ffhq_encode.pt
{'exp_dir': None, 'dataset_type': 'ffhq_encode', 'encoder_type': 'Encoder4Editing', 'batch_size': 8, 'test_batch_size': 4, 'workers': 8, 'test_workers': 4, 'learning_rate': 0.0001, 'optim_name': 'ranger', 'train_decoder': False, 'start_from_latent_avg': True, 'lpips_lambda': 0.8, 'id_lambda': 0.1, 'l2_lambda': 1.0, 'stylegan_weights': 'pretrained_models/stylegan2-ffhq.pkl', 'stylegan_size': 1024, 'checkpoint_path': 'pretrained_models/e4e_ffhq_encode.pt', 'max_steps': 300000, 'image_interval': 100, 'board_interval': 50, 'val_interval': 10000, 'save_interval': 200000, 'w_discriminator_lambda': 0.1, 'w_discriminator_lr': 2e-05, 'r1': 10, 'd_reg_every': 16, 'use_w_pool': True, 'w_pool_size': 50, 'sub_exp_dir': None, 'delta_norm': 2, 'delta_norm_lambda': 0.0002, 'keep_optimizer': False, 'resume_training_from_ckpt': None, 'update_param_list': None, 'device': 'cuda:0', 'lpips_type': 'alex', 'progressive_steps': [0, 20000, 22000, 24000, 26000, 28000, 30000, 32000, 34000, 36000, 38000, 40000, 42000, 44000, 46000, 48000, 50000, 52000], 'progressive_start': 20000, 'progressive_step_every': 2000}
Loading e4e over the pSp framework from checkpoint: pretrained_models/e4e_ffhq_encode.pt
loaded e4e from pretrained_models/e4e_ffhq_encode.pt
e4e stylegan is from pretrained_models/stylegan2-ffhq.pkl
Segmentation fault (core dumped)
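One way to narrow down where the crash happens is Python's built-in faulthandler, which prints the Python traceback at the moment of the native crash (this is standard Python tooling, not part of this repo):

```python
# Run as: python -X faulthandler coach_test.py
# or add these two lines at the very top of coach_test.py:
import faulthandler
faulthandler.enable()  # dump the Python traceback when the process receives SIGSEGV
```

That usually points to the library call (e.g. a CUDA or C++ extension) that is actually segfaulting.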
I know that MTCNN returns 5 keypoints, but you don't specify whether the Dlib results in Table 3 use all 68 points. Neither #8 nor the paper makes it clear how many facial landmarks from Dlib are used.
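For reference, this is roughly how the full 68-point Dlib landmark set is extracted (the model file is the standard dlib release and the image path is a placeholder; whether Table 3 uses all 68 points or only a 5-point subset is exactly the open question):

```python
import dlib
import numpy as np

# Standard dlib 68-point landmark pipeline (model file from the dlib releases).
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = dlib.load_rgb_image("face.jpg")  # placeholder image path
faces = detector(img, 1)               # upsample once to help with small faces
for rect in faces:
    shape = predictor(img, rect)
    pts = np.array([(p.x, p.y) for p in shape.parts()])  # (68, 2) landmarks
    print(pts.shape)
```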