Open dzz416 opened 2 years ago
As you said in the article, the image registration method proposed in the RealSR paper is used to correct the images frame by frame. So should my own images be registered first before using the network model to perform super-resolution? Can you explain how to use the registration code in RealSR? It has 3 folders with the same name, and I don't know how to use it to correct my own data. Thanks for your reply.
For running inference on your own data, you do not need to perform registration using the code in RealSR. You only need that if you want to create your own training data.
Okay. I still wonder how to use the MATLAB code in the distortion correction part of RealSR.
I noticed that LP-KPN in RealSR decomposes the Y-channel image to estimate blur kernels at different scales, while your RealVSR uses a Laplacian pyramid to decompose the image and only applies different losses to the components at the end, so the network model itself does not need to be changed. What I want to know is whether this can achieve the same effect? And how would you analyze the advantages and disadvantages of the two methods?
Maybe it can achieve similar effects. The Laplacian pyramid loss in our paper mainly aims to tackle the misalignment problem in terms of color/luminance. It also offers some help for displacement.
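To make the idea concrete, below is a minimal NumPy sketch of a decomposition-based loss in the spirit described above: the output and the ground truth are each split into a Laplacian pyramid, and a Charbonnier penalty is applied per level. This is an illustrative sketch, not the repository's actual implementation; the average-pooling pyramid, the nearest-neighbour upsampling, and the per-level weights are my own simplifications.

```python
import numpy as np

def downsample(img):
    # 2x average-pool downsample (a simple stand-in for Gaussian blur + stride 2);
    # assumes even spatial dimensions at each level
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # nearest-neighbour 2x upsample
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels=3):
    # detail (high-frequency) bands at each scale, plus the low-frequency residual
    pyr, cur = [], img
    for _ in range(levels - 1):
        down = downsample(cur)
        pyr.append(cur - upsample(down))
        cur = down
    pyr.append(cur)  # low-frequency residual carries the color/luminance
    return pyr

def charbonnier(pred, target, eps=1e-6):
    # Charbonnier (smooth L1) penalty, averaged over all pixels
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

def lap_pyramid_loss(sr, gt, levels=3, weights=(1.0, 1.0, 1.0)):
    # weighted sum of per-level Charbonnier penalties; different weights
    # let the low-frequency (color/luminance) band be penalized differently
    loss = 0.0
    for w, p_sr, p_gt in zip(weights, laplacian_pyramid(sr, levels),
                             laplacian_pyramid(gt, levels)):
        loss += w * charbonnier(p_sr, p_gt)
    return loss
```

Because a global color/luminance shift between the SR output and the ground truth only lands in the low-frequency residual, this decomposition lets the loss treat such misalignment separately from the high-frequency details.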
Thanks a lot.
I am trying to run train.py with this command: CUDA_VISIBLE_DEVICES=1 python codes/train.py -opt opts/edvr_realvsr_split.yml
The program does not report an error, but it never starts training; it just stops here:
21-12-03 10:17:52.792 - INFO: Random seed: 0
21-12-03 10:17:52.794 - INFO: Temporal augmentation interval list: [1], with random reverse is False.
21-12-03 10:17:52.794 - INFO: Using cache keys: /home1/dzzHD/RealVSR-main/keys/realvsr_keys.pkl
21-12-03 10:17:52.862 - INFO: Remove sequences: ['016', '018', '028', '029', '039', '044', '049', '088', '090', '108', '135', '157', '159', '169', '170', '211', '212', '223', '245', '246', '247', '250', '266', '276', '279', '289', '293', '295', '303', '317', '323', '344', '346', '360', '362', '364', '374', '407', '429', '431', '433', '449', '451', '458', '461', '471', '472', '484', '488', '492']
21-12-03 10:17:52.863 - INFO: Dataset [RealVSRAllPairDataset - RealVSR_Train] is created.
21-12-03 10:17:52.863 - INFO: Number of train images: 22,500, iters: 1,407
21-12-03 10:17:52.863 - INFO: Total epochs needed: 107 for iters 150,000
The process uses zero GPU memory while it is running. How can I fix it?
I also want to ask: what is the difference between the split model and the combine model? And how long does the training take?
Maybe it is because you are using an old PyTorch version (<1.7) with a modern GPU (NVIDIA 30 series)? The training time depends on many factors, such as the batch size, the number of iterations, and the GPU you use. The split model uses the decomposition-based loss, while the combine model uses the usual Charbonnier (CB) loss.
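For reference, the "usual CB loss" mentioned above is the Charbonnier loss, a differentiable smooth-L1 penalty that the combine model applies directly to the full output frame (rather than to pyramid components). A minimal NumPy sketch, not the repository's code:

```python
import numpy as np

def charbonnier(pred, target, eps=1e-6):
    # Charbonnier penalty: sqrt((x - y)^2 + eps^2), averaged over all pixels.
    # Behaves like L1 for large errors but stays differentiable at zero.
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))
```

The combine model computes this once on the whole frame; the split model instead applies the same penalty per decomposition component, with its own weight for each, so the network architecture is unchanged between the two.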
Sorry to disturb you again. Could I add your social account, such as WeChat, or get your email? I want to ask you some questions. I also found that you know the author of RealSR; could you help me get contact information for csjcai?
You can contact me via the email in the paper. For the author of RealSR, you may find his email in his paper or on his website.