yourbikun opened this issue 2 years ago
And I find that if I use the default preprocess and set --load_size 800 --crop_size 400, this error doesn't happen.
Sorry about that. There was a mismatch in how interpolation methods are typed between torchvision.transforms.InterpolationMode and PIL.Image. It's a new bug introduced when we tried to suppress warnings (PR #1414). I pushed the fix to the master branch (d5e62dd021d51b).
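If you cannot pull the fix right away, here is a minimal sketch of a local workaround, assuming the ValueError comes from torchvision's InterpolationMode enum being handed directly to PIL's Image.resize (which only accepts PIL's integer filter constants). The scale_width helper below is illustrative, not the repo's exact code:

```python
# Sketch of a workaround: translate torchvision's InterpolationMode enum
# into the PIL filter constants that Image.resize actually accepts.
from PIL import Image
from torchvision.transforms import InterpolationMode

# Assumed mapping from enum members to the corresponding PIL filters.
_PIL_FILTERS = {
    InterpolationMode.NEAREST: Image.NEAREST,
    InterpolationMode.BILINEAR: Image.BILINEAR,
    InterpolationMode.BICUBIC: Image.BICUBIC,
    InterpolationMode.LANCZOS: Image.LANCZOS,
    InterpolationMode.BOX: Image.BOX,
    InterpolationMode.HAMMING: Image.HAMMING,
}

def scale_width(img, target_width, method=InterpolationMode.BICUBIC):
    """Resize a PIL image to target_width, preserving aspect ratio (illustrative helper)."""
    pil_filter = _PIL_FILTERS[method]  # enum -> PIL integer constant, avoids the ValueError
    w, h = img.size
    if w == target_width:
        return img
    target_height = int(round(target_width * h / w))
    return img.resize((target_width, target_height), pil_filter)
```

Converting the enum to the matching PIL constant before calling resize is enough to avoid the "Unknown resampling filter" error.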
Thanks!
I have another question. I used your code on an RTX 3090 and set the batch size to 4. Normally I would increase the learning rate at the same time, but I did not see anything in the tips document about doubling the learning rate. So, do I need to double the learning rate? If so, how should I set it?
You are right to point out the general rule of thumb about scaling the learning rate with the batch size. However, it's still empirical, particularly for GAN training. I'd just try it with the same learning rate, and also with a larger learning rate.
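For reference, and assuming the standard --lr and --batch_size training options (default learning rate 0.0002, batch size 1), the linear-scaling heuristic would mean passing something like "--batch_size 4 --lr 0.0008", or "--lr 0.0004" if you only double it. Treat these as starting points to compare empirically rather than a fixed rule.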
We actually recommend sticking to a batch size of 1, because overfitting is a big issue for pix2pix and CycleGAN. Note that the datasets we used in this work are quite small, and keeping the batch size small helps prevent overfitting.
You are right. After I used a higher learning rate, the test results were very poor even though the loss reached an ideal level; there is a lot of noise in the output images. When I kept the learning rate unchanged with the larger batch size, it performed well. Of course, since the "scale_width" option could not be used normally before, I used the default preprocess; I don't know whether that is related.
I need your help! I train with "python train.py --dataroot /gemini/data-1/render_data1 --name r_cyclegan --model cycle_gan --preprocess scale_width_and_crop --load_size 800 --crop_size 400", and then get this error: ValueError: Unknown resampling filter (InterpolationMode.BICUBIC). Use Image.NEAREST (0), Image.LANCZOS (1), Image.BILINEAR (2), Image.BICUBIC (3), Image.BOX (4) or Image.HAMMING (5)