-
Dear,
I saw three values in printing gan-loss:
[0.020138385, 0.018311221, 1.8271642]
Could you please tell me which losses these values represent?
Best
-
Dear authors, thank you for making this great work public.
I have been finetuning Big-LaMa on my own data and my own mask generation, and I would love to hear your advice on how to finetune it in t…
vkhoi updated 7 months ago
-
Sorry for bothering you. My PSNR after reproducing the results is only 30.48; I think I didn't modify the code correctly. May I ask how the loss function is set up at the very beginning of training?
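Since the question hinges on a specific PSNR figure, a quick sanity check of how PSNR is conventionally computed may help rule out a measurement mismatch (e.g. wrong peak value or color space). This is a minimal NumPy sketch of the standard definition, not the repository's own evaluation code:

```python
import numpy as np

def psnr(reference, reconstruction, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    diff = reference.astype(np.float64) - reconstruction.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a uniform error of 8 gray levels on 8-bit images.
ref = np.full((64, 64), 128, dtype=np.uint8)
rec = np.full((64, 64), 136, dtype=np.uint8)
print(round(psnr(ref, rec), 2))  # 30.07
```

Note that evaluating on the Y channel only, or with `max_val=1.0` on normalized data, changes the number; mismatches here often explain "low" reproduced PSNR.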
-
Hi there, thanks for the excellent work. I am trying to use the VQ-VAE to reconstruct face images, but the results are blurry. Is there any suggestion for this? For example, can I add perceptual loss o…
-
Hi, I'm posting a new issue since the other one is closed.
I'm trying this exact scenario of retraining the sdxl vae. I've made the changes indicated to create a config file that looks like below so th…
-
https://arxiv.org/pdf/1703.10593.pdf
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a …
leo-p updated 7 years ago
-
https://github.com/krasserm/super-resolution/blob/master/train.py#L132
GAN model training takes a long time and easily dies. Without checkpoints, everything has to restart from scratch. Thanks.
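For the resume-after-crash concern, the usual pattern is to write the full training state (step counter, model weights, optimizer state) to disk periodically and reload it on startup. A generic sketch with hypothetical names, not the linked repository's actual API (Keras users would typically reach for `tf.train.Checkpoint` or `ModelCheckpoint` instead):

```python
import os
import pickle
import tempfile

def save_checkpoint(path, state):
    """Atomically write training state to disk."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic rename: never leaves a half-written file

def load_checkpoint(path):
    """Return the saved state, or None if no checkpoint exists yet."""
    if not os.path.exists(path):
        return None
    with open(path, "rb") as f:
        return pickle.load(f)

# Hypothetical resume loop: a restarted run picks up from the last saved step.
ckpt = os.path.join(tempfile.mkdtemp(), "gan.ckpt")
state = load_checkpoint(ckpt) or {"step": 0, "weights": [0.0]}
for _ in range(3):
    state["step"] += 1          # stands in for one training step
    save_checkpoint(ckpt, state)
print(load_checkpoint(ckpt)["step"])  # 3
```

The write-to-temp-then-rename step matters precisely because the process can die mid-write.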
-
I'm trying to train Real-ESRGAN with 400 images.
The training curves are very noisy, especially the perceptual loss (l_g_percep).
I've tried to use various different learning rates.
I'm using 4 GP…
-
When I run this command:
python main.py --input_dir ffhq_image --im_path1 source.png --im_path2 target.png --output_dir style_your_hair_output --warp_loss_with_prev_list delta_w style_hair_slic_large …
-
Please, all,
I need the code implementing this part:
{The SRResNet networks were trained with a learning rate of 10^−4 and 10^6 update iterations. We employed the trained MSE-bas…