liyunsheng13 / BDL


First Image Translation #37

Open JialeTao opened 4 years ago

JialeTao commented 4 years ago

Hi, I have read the paper and still have some questions.

  1. CycleGAN is trained with a perceptual loss. Does the first image translation use this perceptual loss? If so, which segmentation model's parameters are used? Is it the source-only model with 33.6 mIoU reported in the paper? (See the sketch after this list.)

  2. With the first translated images in hand, when the adversarial training of the segmentation model starts, are the initial parameters the ImageNet-pretrained ones or the source-only parameters with 33.6 mIoU?
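For reference, a perceptual loss here is a feature-matching term computed with a frozen network (in BDL, features from a segmentation model). A minimal PyTorch sketch, assuming a generic frozen `extractor` module; this is illustrative, not the repo's exact implementation:

```python
import torch
import torch.nn as nn

class PerceptualLoss(nn.Module):
    """Feature-matching loss between an image and its translation,
    computed with a frozen feature extractor (assumed here to be a
    segmentation backbone; illustrative, not BDL's exact code)."""

    def __init__(self, extractor):
        super().__init__()
        self.extractor = extractor.eval()
        for p in self.extractor.parameters():
            p.requires_grad = False  # the extractor stays fixed
        self.criterion = nn.L1Loss()

    def forward(self, translated, original):
        # Detach the target features; gradients flow back to the
        # generator only through the translated image.
        with torch.no_grad():
            target = self.extractor(original)
        return self.criterion(self.extractor(translated), target)
```

The generator objective would then add a weighted term like `lambda_perc * perc_loss(G(x), x)` to the usual adversarial and cycle-consistency losses, where `lambda_perc` is a hypothetical weight.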

liyunsheng13 commented 4 years ago

For your first question, I tried both scenarios and got similar results, so you can train CycleGAN without the perceptual loss. For the second one, the model is pretrained on ImageNet.
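Concretely, "pretrained on ImageNet" means initializing the backbone from ImageNet classification weights rather than from the 33.6 mIoU source-only segmentation checkpoint. A torchvision sketch (BDL uses a ResNet-101-based DeepLab network; the loading code below is illustrative):

```python
import torchvision.models as models

# Initialize the segmentation backbone from ImageNet classification
# weights, not from the source-only segmentation checkpoint (33.6 mIoU).
backbone = models.resnet101(pretrained=True)
# ...attach the segmentation head and start adversarial training on the
# translated images from this initialization.
```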

JialeTao commented 4 years ago


Thanks for the quick reply! I found it very slow to train CycleGAN with the perceptual loss (it may take around a month in my setup; I mentioned this under another issue), so I'm surprised that you spent only 4 days. Did you use a single GPU or multiple GPUs?

liyunsheng13 commented 4 years ago

I used 4 GPUs.
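For context, the official pytorch-CycleGAN-and-pix2pix code splits each batch across GPUs with torch.nn.DataParallel (its --gpu_ids option); a minimal sketch of that setup, with a placeholder generator standing in for the real one:

```python
import torch.nn as nn

# Placeholder generator; stands in for CycleGAN's ResNet generator.
netG = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)

# Replicate over four GPUs; each forward pass splits the batch across
# them, mirroring --gpu_ids 0,1,2,3 in the official CycleGAN code.
netG = nn.DataParallel(netG, device_ids=[0, 1, 2, 3]).cuda()
```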

JialeTao commented 4 years ago


Thanks, that might be normal then; the GPU I used is not comparable to a Tesla V100. Would it be convenient for you to upload the first-round translated images? It's also fine if not. When you train CycleGAN with a larger batch size, is the initial learning rate the same as in standard CycleGAN?

liyunsheng13 commented 4 years ago

You can train with fewer epochs. I uploaded the parameters I used; you can refer to them.
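The uploaded parameters are the authoritative answer; for the learning-rate question above, a common heuristic when enlarging the batch is linear scaling against the standard CycleGAN defaults (lr 2e-4 at batch size 1). A sketch, as an assumption rather than the repo's stated setting:

```python
# Linear learning-rate scaling heuristic (an assumption, not the
# repo's stated setting): scale the CycleGAN defaults by the
# batch-size ratio.
BASE_LR = 2e-4     # default CycleGAN learning rate
BASE_BATCH = 1     # default CycleGAN batch size

def scaled_lr(batch_size: int) -> float:
    """Starting learning rate for a larger batch, by linear scaling."""
    return BASE_LR * batch_size / BASE_BATCH

print(scaled_lr(4))  # 0.0008, e.g. one image per GPU on 4 GPUs
```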

JialeTao commented 4 years ago

Thanks very much! I've seen them.