xinntao / EDVR

Winning Solution in NTIRE19 Challenges on Video Restoration and Enhancement (CVPR19 Workshops) - Video Restoration with Enhanced Deformable Convolutional Networks. EDVR has been merged into BasicSR and this repo is a mirror of BasicSR.
https://github.com/xinntao/BasicSR

some output frames show colorful distortions, especially in the subtitle area. #22

Open yinnhao opened 5 years ago

yinnhao commented 5 years ago

[screenshot: output frame with colorful distortions in the subtitle area]

xinntao commented 5 years ago

Oh, that is a severe artifact. Do these artifacts appear when testing on the Vid4 or REDS dataset?

It may arise from the different distributions of the datasets. Artifacts like these are usually related to BN layers, but there is no BN layer in the EDVR model.

yinnhao commented 5 years ago

> Oh, that is a severe artifact. Do these artifacts appear when testing on the Vid4 or REDS dataset?
>
> It may arise from the different distributions of the datasets. Artifacts like these are usually related to BN layers, but there is no BN layer in the EDVR model.

The artifacts don't appear on the Vid4 or REDS datasets; they mostly appear in unnatural scenes.

xinntao commented 5 years ago

It is an interesting problem. It probably arises from the different distributions of the training data and testing data.

yinnhao commented 5 years ago

> It is an interesting problem. It probably arises from the different distributions of the training data and testing data.

Hi, xinntao. I found that these artifacts become more severe when the offset mean is larger than 100. Can you give me some advice on how to keep the offset values under control during training? I found that increasing the input image size (deblur task) can alleviate this situation to some extent.

xinntao commented 5 years ago

If the offset mean is larger than 100, the offsets must be meaningless. This is a known issue of using DCN for alignment: unstable training. Usually, we can fine-tune a pre-trained model with a smaller learning rate for the conv layers that predict the offsets.

Sometimes a too-large offset mean occurs only occasionally. In that case, we just stop training and resume from a checkpoint whose offset predictions are normal.
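A minimal sketch of the "smaller learning rate for the offset convs" idea, assuming PyTorch and that the offset-predicting layers can be identified by a name substring such as `conv_offset` (the substring is an assumption; check your model's actual parameter names):

```python
import torch

def build_optimizer(model, base_lr=1e-4, offset_lr_mult=0.1):
    """Assign a reduced learning rate to offset-predicting conv layers.

    'conv_offset' is an assumed naming convention, not guaranteed
    to match every DCN implementation.
    """
    offset_params, other_params = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if 'conv_offset' in name:
            offset_params.append(param)
        else:
            other_params.append(param)
    # Two param groups: normal LR for the trunk, smaller LR for offsets.
    return torch.optim.Adam([
        {'params': other_params, 'lr': base_lr},
        {'params': offset_params, 'lr': base_lr * offset_lr_mult},
    ])
```

When resuming a fine-tune, the same grouping also makes it easy to freeze the offset convs entirely by setting their group's `lr` to 0.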

yinnhao commented 5 years ago

@xinntao I solved this problem by fine-tuning your pretrained model using `train_EDVR_M.yml`. What surprised me is that I used only one video during fine-tuning, watching the artifacts slowly disappear. The new model then works well on 99% of the videos in my dataset without artifacts (a too-large offset mean is basically no longer present), although the video content varies greatly. So I have two questions:

1. Do you think this is because the model learned how to understand unnatural scenes during fine-tuning with only one video? (I found that the noise is mostly in unnatural areas.)
2. The TSA module is behind the DCN module, so does fine-tuning TSA influence DCN through back-propagation? I can't understand why the TSA module is so powerful.

Mukosame commented 5 years ago

+1, same artifacts too.

xinntao commented 5 years ago

@hahahaprince 1) I think so. Before fine-tuning, the pre-trained EDVR network had not seen these data and thus produced artifacts. Even with only one video during fine-tuning, the model learns to deal with these scenes. 2) Did you perform the fine-tuning on all the parameters? If you use `train_EDVR_M.yml`, it fine-tunes all the parameters.

@Mukosame You can finetune the pre-trained model using your own data, as @hahahaprince does.

sjscotti commented 4 years ago

@hahahaprince I am using the EDVR_REDS_deblurcomp_L.pth model and I am getting similar artifacts in some of the videos I am processing with EDVR. Are you doing blur_comp processing too, and if so, would you be willing to share your modified model?

vlee-harmonicinc commented 4 years ago

I also encounter this artifact even though the offset mean is not larger than 100. How can I visualize the ablation on PCD or TSA like Figures 8 and 9 in the paper?

I visualized the offset by displaying the max offset over all channels (8 groups × 3×3 kernel size) in `ModulatedDeformConvPack`, relative to the width and height of the current layer.

Edit: my previous code for parsing the offset x/y coordinates was incorrect. I checked the CUDA code and guessed the offset format should be `(deformable group, kernel 3, 3, x, y, H, W)` (please correct me if I misunderstand the offset).

```python
# Snippet from inside the forward pass, after reshaping the raw DCN
# offsets to offset_view following the guessed layout above.
# Requires: import glob, os, torch, torchvision
if VISUALIZE_OFFSET:
    # Max |offset| over all groups/kernel positions, normalized by the
    # feature-map size of the current layer.
    offset_x = torch.max(torch.abs(offset_view[:, :, :, 0, :, :]).view([-1, H, W]), dim=0)[0] / H
    offset_y = torch.max(torch.abs(offset_view[:, :, :, 1, :, :]).view([-1, H, W]), dim=0)[0] / W
    padding = torch.zeros([H, W], dtype=offset.dtype, device=offset.device)
    # Save as an RGB image: red = x-offset, blue = y-offset, green unused.
    offset_tensor_img = torch.stack([offset_x, padding, offset_y])
    next_index = len(glob.glob(os.path.join(self.offset_output_folder, '*.png')))
    torchvision.utils.save_image(
        offset_tensor_img,
        os.path.join(self.offset_output_folder, f'{next_index:05d}.png'))
```

[offset visualization screenshots]

It seems the artifact is introduced by the offset: even though the mean of the offset is below 1, the offset in specific pixel regions or in one deformable group is high.

According to "NTIRE 2019 Challenge on Video Deblurring and Super-Resolution: Dataset and Study", the REDS dataset synthesizes motion blur by increasing the frame rate to a virtual 1920 fps, recursively interpolating frames with a CNN. That seems quite complex, and I don't have a high-quality version of the target video for training. Is it possible to fine-tune / transfer the pre-trained model from general unpaired datasets?

---------- update ----------

I'm fine-tuning the EDVR_REDS_deblurcomp_L model. I cannot use the stop-and-resume trick based on the offset warning, since the warning keeps showing from the beginning. I saved val_images; after 55000 iterations (still epoch 1) they show artifacts like the one below.

[screenshot: validation frame with artifacts]

Is it possible that the problem happens when CosineAnnealingLR_Restart restarts?
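One way to check that hypothesis is to trace the learning-rate curve and see where it jumps back up, then compare those iterations against when the artifacts reappear. A sketch using PyTorch's built-in `CosineAnnealingWarmRestarts` as a stand-in for this repo's `CosineAnnealingLR_Restart` (the period, base LR, and `eta_min` values here are made-up, not EDVR's actual config):

```python
import torch

# Made-up schedule: base LR 4e-4, restart period 500 iterations.
param = torch.nn.Parameter(torch.zeros(1))
opt = torch.optim.SGD([param], lr=4e-4)
sched = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
    opt, T_0=500, eta_min=1e-7)

lrs = []
for it in range(1500):
    lrs.append(opt.param_groups[0]['lr'])
    opt.step()
    sched.step()

# A restart shows up as the LR jumping back toward its initial value.
restarts = [i for i in range(1, len(lrs)) if lrs[i] > lrs[i - 1] * 10]
print(restarts)
```

If a restart iteration lines up with when the artifacts return, lowering the restart learning rate (or disabling restarts during fine-tuning) would be worth trying.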