gouayao / MCL


!! URGENT !! sinMCL error #2

Closed tusharrewatkar closed 1 month ago

tusharrewatkar commented 1 month ago

python train.py --dataroot ./datasets/UAV_Synthetic --name UAV_sinmcl --model sinmcl --gpu_ids 0
----------------- Options ---------------
IMCL_mode: IMCL batch_size: 16 beta1: 0.0 beta2: 0.99 checkpoints_dir: ./checkpoints continue_train: False crop_size: 64 dataroot: ./datasets/UAV_Synthetic [default: placeholder] dataset_mode: singleimage direction: AtoB display_env: main display_freq: 400 display_id: None display_ncols: 4 display_port: 8097 display_server: http://localhost/ display_winsize: 256 easy_label: experiment_name epoch: latest epoch_count: 1 evaluation_freq: 5000 flip_equivariance: False gan_mode: nonsaturating gpu_ids: 0 init_gain: 0.02 init_type: xavier input_nc: 3 isTrain: True [default: None]
lambda_GAN: 1.0 lambda_NCE: 4.0 lambda_R1: 1.0 lambda_identity: 1.0 lbd: 0.01 load_size: 1024 lr: 0.002 lr_decay_iters: 50 lr_policy: linear max_dataset_size: inf model: sinmcl [default: cut]
n_epochs: 8 n_epochs_decay: 8 n_layers_D: 3 name: UAV_sinmcl [default: experiment_name] nce_T: 0.07 nce_idt: True nce_includes_all_negatives_from_minibatch: True nce_layers: 0,2,4 ndf: 8 netD: stylegan2 netF: mlp_sample netF_nc: 256 netG: stylegan2 ngf: 10 no_antialias: False no_antialias_up: False no_dropout: True no_flip: False no_html: False normD: instance normG: instance num_patches: 1 num_threads: 4 output_nc: 3 phase: train pool_size: 0 preprocess: zoom_and_patch pretrained_name: None print_freq: 100 random_scale_max: 3.0 save_by_iter: False save_epoch_freq: 1 save_latest_freq: 20000 serial_batches: False stylegan2_G_num_downsampling: 1 suffix: temp: 0.1 update_html_freq: 1000 verbose: False
----------------- End -------------------
Image sizes (1920, 1080) and (1920, 1080)
dataset [SingleImageDataset] was created
model [SinMCLModel] was created
The number of training images = 100000
Setting up a new session...
create web directory ./checkpoints\UAV_sinmcl\web...
Traceback (most recent call last):
  File "train.py", line 43, in <module>
    model.data_dependent_initialize(data)
  File "D:\rewa_tu\MCL-master\models\imcl_model.py", line 118, in data_dependent_initialize
    self.compute_D_loss().backward()  # calculate gradients for D
  File "D:\rewa_tu\MCL-master\models\sinmcl_model.py", line 66, in compute_D_loss
    GAN_loss_D = super().compute_D_loss()
  File "D:\rewa_tu\MCL-master\models\imcl_model.py", line 182, in compute_D_loss
    mcl_fake = F.normalize(pred_fake.view(-1, 30))
RuntimeError: shape '[-1, 30]' is invalid for input of size 16
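The mismatch can be reproduced in isolation. Below is a hypothetical sketch (a stand-in tensor, not the repo's actual discriminator output), assuming `pred_fake` holds one scalar per image for a batch of 16, so it has 16 elements in total and cannot be tiled into rows of 30:

```python
import torch
import torch.nn.functional as F

# Stand-in for the discriminator output: one scalar per image, batch of 16.
pred_fake = torch.randn(16, 1)

try:
    pred_fake.view(-1, 30)  # 16 elements are not divisible into rows of 30
except RuntimeError as e:
    print("view failed:", e)

# A width that divides the element count works, e.g. the full batch:
mcl_fake = F.normalize(pred_fake.view(-1, 16))
print(mcl_fake.shape)  # torch.Size([1, 16])
```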

I'm using the sinMCL model; the crop size is already 64, and it doesn't work with 256 either.

Could you please tell me how to resolve this? It's urgent.

gouayao commented 1 month ago

When SinMCL is used, you should modify the size of the discriminator output layer matrix (it is no longer 30*30 for your input) to fit the input image.
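To see why the right width tracks the input size, here is a minimal stand-in discriminator (three stride-2 convolutions, an assumption for illustration, not the repo's actual stylegan2 netD): the output map side roughly halves per layer, so the element count of `pred_fake` changes with `crop_size`:

```python
import torch
import torch.nn as nn

# Toy stand-in for a patch discriminator: each stride-2 conv halves the
# spatial side, so the output map depends directly on the input crop size.
netD = nn.Sequential(
    nn.Conv2d(3, 8, 4, stride=2, padding=1),
    nn.Conv2d(8, 16, 4, stride=2, padding=1),
    nn.Conv2d(16, 1, 4, stride=2, padding=1),
)

for crop in (64, 256):
    x = torch.randn(1, 3, crop, crop)
    print(crop, "->", tuple(netD(x).shape))
```

With this toy netD the output map is 8x8 at crop 64 and 32x32 at crop 256, which is why a view width hardcoded for one crop size fails for another.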


tusharrewatkar commented 1 month ago

Thanks for the response, but I don't understand what the discriminator output layer is.

Which parameter do I have to change to achieve this?

gouayao commented 1 month ago

The line to change is:

F.normalize(pred_fake.view(-1, 30))

If you choose an input image of 256*256, (-1, 16) or (-1, 14) may work.
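Instead of hardcoding 30, 16, or 14, one option (an assumption on our part, not the repo's code; the helper name is hypothetical) is to flatten each sample dynamically so the width always matches whatever the discriminator emits:

```python
import torch
import torch.nn.functional as F

def normalize_rows(pred):
    """Hypothetical helper: flatten each sample to one row before normalizing,
    so no hardcoded width (30, 16, 14, ...) has to track the crop size."""
    return F.normalize(pred.view(pred.size(0), -1), dim=1)

# Works for both a scalar-per-image output and a patch map:
print(normalize_rows(torch.randn(16, 1)).shape)          # torch.Size([16, 1])
print(normalize_rows(torch.randn(16, 1, 30, 30)).shape)  # torch.Size([16, 900])
```

Note this groups one sample per row, which is a different grouping than `view(-1, 30)`; whether that matches the intended MCL loss is something to verify against the paper.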


tusharrewatkar commented 1 month ago

(CUT) D:\rewa_tu\MCL-master>python train.py --dataroot ./datasets/UAV_Synthetic --name UAV_sinmcl --model sinmcl --gpu_ids 0 --crop_size 256
----------------- Options ---------------
IMCL_mode: IMCL batch_size: 16 beta1: 0.0 beta2: 0.99 checkpoints_dir: ./checkpoints continue_train: False crop_size: 256 [default: 64] dataroot: ./datasets/UAV_Synthetic [default: placeholder] dataset_mode: singleimage direction: AtoB display_env: main display_freq: 400 display_id: None display_ncols: 4 display_port: 8097 display_server: http://localhost/ display_winsize: 256 easy_label: experiment_name epoch: latest epoch_count: 1 evaluation_freq: 5000 flip_equivariance: False gan_mode: nonsaturating gpu_ids: 0 init_gain: 0.02 init_type: xavier input_nc: 3 isTrain: True [default: None]
lambda_GAN: 1.0 lambda_NCE: 4.0 lambda_R1: 1.0 lambda_identity: 1.0 lbd: 0.01 load_size: 1024 lr: 0.002 lr_decay_iters: 50 lr_policy: linear max_dataset_size: inf model: sinmcl [default: cut]
n_epochs: 8 n_epochs_decay: 8 n_layers_D: 3 name: UAV_sinmcl [default: experiment_name] nce_T: 0.07 nce_idt: True nce_includes_all_negatives_from_minibatch: True nce_layers: 0,2,4 ndf: 8 netD: stylegan2 netF: mlp_sample netF_nc: 256 netG: stylegan2 ngf: 10 no_antialias: False no_antialias_up: False no_dropout: True no_flip: False no_html: False normD: instance normG: instance num_patches: 1 num_threads: 4 output_nc: 3 phase: train pool_size: 0 preprocess: zoom_and_patch pretrained_name: None print_freq: 100 random_scale_max: 3.0 save_by_iter: False save_epoch_freq: 1 save_latest_freq: 20000 serial_batches: False stylegan2_G_num_downsampling: 1 suffix: temp: 0.1 update_html_freq: 1000 verbose: False
----------------- End -------------------
Image sizes (1920, 1080) and (1920, 1080)
dataset [SingleImageDataset] was created
model [SinMCLModel] was created
The number of training images = 100000
Setting up a new session...
create web directory ./checkpoints\UAV_sinmcl\web...
---------- Networks initialized -------------
[Network G] Total number of parameters : 0.192 M
[Network F] Total number of parameters : 0.219 M
[Network D] Total number of parameters : 5.895 M

(epoch: 1, iters: 400, time: 0.111, data: 12.968) G_GAN: nan D_real: nan D_fake: nan G: nan NCE: nan mcl: nan NCE_Y: nan D_R1: nan idt: nan

As you can see from the log above, even after changing the code as below for crop size 256, I'm getting NaN values during training:

mcl_fake = F.normalize(pred_fake.view(-1, 16))
mcl_real = F.normalize(self.pred_real.view(-1, 16))
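To narrow down where the NaNs first appear, a defensive guard like the following may help (a generic sketch, not the repo's training loop); `torch.autograd.set_detect_anomaly(True)` is another standard way to make `backward()` fail at the op that produced a NaN gradient:

```python
import torch

# Generic sketch: guard the optimizer step so the first NaN iteration is
# caught and inspected instead of silently corrupting all later weights.
loss = torch.tensor(float("nan"))  # stand-in for a computed training loss
if torch.isnan(loss).any():
    print("NaN loss detected; skip optimizer.step() and inspect the inputs")
```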

gouayao commented 1 month ago

Please tell me the dataset you are using.


tusharrewatkar commented 1 month ago

I'm using my own dataset, but it's the same error with single_image_monet_etretat: NaN values during training.

Try rerunning this repo with the single_monet dataset, the sinmcl model, and:

mcl_fake = F.normalize(pred_fake.view(-1, 16))
mcl_real = F.normalize(self.pred_real.view(-1, 16))

tusharrewatkar commented 1 month ago

Any update on this?

One more thing: shouldn't the discriminator output layer always be equal to the batch size? If I set the batch size to 1, then the discriminator output layer would be F.normalize(self.pred_real.view(-1, 1)).

I'm not sure how the discriminator output layer depends on the input image, as you mentioned earlier.
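For intuition on this point, the rows produced by `view(-1, N)` need not align with samples at all; a hypothetical example with a patch-map output (not the repo's actual tensors):

```python
import torch

# B=16 samples, each with a 2x2 patch map (64 elements in total).
pred = torch.arange(16 * 4).float().view(16, 1, 2, 2)

rows = pred.view(-1, 16)  # 64 / 16 = 4 rows of width 16
print(rows.shape)         # torch.Size([4, 16])
# Row 0 spans samples 0..3, so normalizing per row mixes different images;
# the width that keeps rows sample-aligned depends on the map size, which
# in turn depends on the input image size.
print(rows[0])
```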

gouayao commented 1 month ago

I'm sorry, I'm celebrating the Dragon Boat Festival and will get back to you after the holiday. Regarding the latest question, Figure 2 in the paper may be helpful to you.


gouayao commented 1 month ago

Sorry for replying so late. We have found that the following change in loss_mcl may help you:

mcl_fake = F.normalize(pred_fake.view(-1, 4))
mcl_real = F.normalize(self.pred_real.view(-1, 4))
