XPixelGroup / BasicSR

Open-source image and video restoration toolbox for super-resolution, denoising, deblurring, etc. Currently, it includes EDSR, RCAN, SRResNet, SRGAN, ESRGAN, EDVR, BasicVSR, SwinIR, ECBSR, etc. It also supports StyleGAN2 and DFDNet.
https://basicsr.readthedocs.io/en/latest/
Apache License 2.0

Cannot find SRResNet_bicx4_in3nf64nb16.pth #161

Open · Marshall-yao opened this issue 5 years ago

Marshall-yao commented 5 years ago

Hi, Xintao. Thanks for your wonderful work on ESRGAN.

When I tried to reproduce SRGAN, I could not find SRResNet_bicx4_in3nf64nb16.pth in this GitHub repository. Could you send it to my mailbox, luyao095@gmail.com?

I also think python train.py -opt options/train/train_SRGAN.json is not the training command for SRGAN; it should be python SRGAN_model.py -opt options/train/train_SRGAN.json, right?

Thanks a lot.

xinntao commented 5 years ago

Since we have updated the repo to a new version, you can use MSRResNetx4.pth in https://drive.google.com/drive/folders/1cw-dEpAdwpuQdEC7WJhITwjrn2Tr-hqd with the updated codebase.

No, all the configurations should be set in the config file, and you still need to use python train.py ... By the way, we have updated the BasicSR repo to use YAML configs; you may want to switch to the new version.
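For reference, a minimal sketch of how the pretrained MSRResNet generator might be pointed to in the YAML config (the section and key names below are assumptions based on the configs of that era and may differ in your version):

```yaml
# Sketch only: exact section/key names may differ across BasicSR versions.
path:
  pretrain_model_G: ../experiments/pretrained_models/MSRResNetx4.pth  # downloaded checkpoint
  strict_load: true
  resume_state: ~
```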

Marshall-yao commented 5 years ago

Hi, Xintao. I am extremely interested in your wonderful work on EDVR and really want to reproduce it. However, I only have a single 2080 Ti GPU. Could you tell me how long training may take on this GPU and what results to expect?

Thank you. Best regards

xinntao commented 5 years ago
  1. We use eight GPUs to train our network.
  2. You can find examples of the training logs here: https://github.com/xinntao/EDVR/wiki/Testing-and-Training, and the results here: https://github.com/xinntao/EDVR/wiki/Model-Zoo.

Marshall-yao commented 5 years ago

Thanks very much for your patient reply.

1) I saw that there is a single-GPU training command in run_script.sh of your EDVR work, but I have not found train_SRResNet.yml. Could you send it to me? Have you ever trained on one GPU, and what is the test result on Vid4?

2) Regarding dataroot_GT: /home/xtwang/datasets/REDS/train_sharp_wval.lmdb, which file is the dataroot_GT path set in?

Best regards

Marshall-yao commented 5 years ago

Hi, Xintao. Excuse me, I want to consult you about some questions.

1) vimeo.yml: Since there is no yml file for the Vimeo dataset under options/train/, I think I need to change name, mode, dataroot_GT and dataroot_LQ in REDS_M.yml, right?

Besides, I guess vimeo_M.yml also needs the woTSA pretrained model, right? If so, could you send me this pretrained model? In addition, you could add EDVR_Vimeo90K_SR_M test results to your GitHub.

2) Pretrained model in train_EDVR_M.yml: Because training for REDS_SR_M requires the woTSA 600000_G.pth pretrained model, could you send me this model? You could add it to your GitHub.

I am looking forward to hearing from you. Best regards

Marshall-yao commented 5 years ago

Hi, Xintao. Excuse me, I want to consult you about a question regarding train_sharp_wval.lmdb in EDVR.

I used create_lmdb_mp.py to generate train_sharp_wval.lmdb, but the process was killed due to a memory shortage. Could you give me train_sharp_wval.lmdb, vimeo90k_train_GT.lmdb and vimeo90k_train_LR7frames.lmdb?

Thanks a lot. Looking forward to your early reply.

Best regards,

xinntao commented 5 years ago

They are very large files and we have not found a place that can host them. I will update create_lmdb_mp.py tomorrow to support limited memory consumption.

Marshall-yao commented 5 years ago

Hi, Xintao. I am very grateful for your patient reply.

If possible, could you send train_sharp_wval.lmdb, vimeo90k_train_GT.lmdb and vimeo90k_train_LR7frames.lmdb to me via Gmail or a Baidu network disk? I am really eager to get these files as soon as possible.

I am sorry for the inconvenience.

Best regards

xinntao commented 5 years ago

My Google Drive does not have that much space. I can upload them to a Baidu drive; once finished, I will let you know.

Marshall-yao commented 5 years ago

Thank you very much, Xintao. You are so kind. I want to consult you about another three questions.

1) What are you working on recently?

2) About EDVR's PCD and TSA modules: I find them particularly novel and very effective. Where did your inspiration come from? I know you borrowed the idea of deformable convolution from TDAN.

Besides, what is the FPS of EDVR? I think you could show a demo of your restoration results on GitHub.

3) Regarding optical flow, alignment, and fusion for video, could you recommend some papers worth reading?

Best regards

Marshall-yao commented 5 years ago

Hi, Xintao. Thank you for uploading the files to a Baidu drive and for sharing them.

1) May I know which ones have been uploaded successfully? I can start with train_sharp_bicubic_wval.lmdb.

2) What is the meaning of --nproc_per_node=8 and --master_port=4321 in the training command? If I train with one GPU, should I change --nproc_per_node=8 to --nproc_per_node=1?

Thanks a lot.

xinntao commented 5 years ago

@yaolugithub, I have updated create_lmdb_mp.py. You can use it to generate the lmdb files.

As for --nproc_per_node and --master_port, you can find the relevant info at https://pytorch.org/docs/stable/distributed.html. Note that training with one GPU will affect performance.

Marshall-yao commented 5 years ago

Hi, Xintao. Thanks very much for updating the code; your coding ability is impressive. Thanks for your guidance.

There are no yml files for the Vimeo90K dataset, so I think you could also upload the relevant files for training on this dataset.

xinntao commented 5 years ago

It is easy to modify an existing yml configuration file to set up Vimeo90K training.
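For example, a rough sketch of what the dataset section might look like for Vimeo90K (the mode name and keys are assumed from the EDVR codebase, and the paths are placeholders for your local setup):

```yaml
# Sketch only: adapt an existing config such as train_EDVR_M.yml;
# mode/key names are assumptions, paths are placeholders.
datasets:
  train:
    name: Vimeo90K
    mode: Vimeo90K
    dataroot_GT: /path/to/vimeo90k_train_GT.lmdb
    dataroot_LQ: /path/to/vimeo90k_train_LR7frames.lmdb
```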

Marshall-yao commented 5 years ago

Thanks very much for your guidance. Now I know how to modify the existing yml.

Besides, there is a command for training with one GPU in run_script.sh, but there is no train_SRResNet.yml. Thus, I have to train the code with distributed training.

Could you tell me what I need to modify to train the code with one or two GPUs? Thanks a lot. Looking forward to hearing from you.

Best regards

xinntao commented 5 years ago

What did you mean by "However, there is no train_SRResNet.yml"?
One GPU: python train.py -opt options/train/train_SRResNet.yml
Two GPUs: python -m torch.distributed.launch --nproc_per_node=2 --master_port=4321 train.py -opt options/train/train_SRResNet.yml --launcher pytorch

Marshall-yao commented 5 years ago

Thank you very much for your reply, Xintao.

I mean that I did not find train_SRResNet.yml on your GitHub. Have you uploaded this file?

Besides, I trained the code with the command python -m torch.distributed.launch --nproc_per_node=2 --master_port=4321 train.py -opt options/train/train_EDVR_M.yml --launcher pytorch on two GPUs, using the pretrained model from train_EDVR_woTSA_M.yml.

So I need to retrain the code with train_SRResNet.yml when I have two GPUs, right?

xinntao commented 5 years ago

The train_SRResNet.yml file: https://github.com/xinntao/BasicSR/blob/master/codes/options/train/train_SRResNet.yml

I do not understand your second question. Why do you need to train SRResNet when you train EDVR?

Marshall-yao commented 5 years ago

My second question is about the two-GPU training command. In your last reply you mentioned the command for training on two GPUs:
Two GPUs: python -m torch.distributed.launch --nproc_per_node=2 --master_port=4321 train.py -opt options/train/train_SRResNet.yml --launcher pytorch

However, I trained with the command python -m torch.distributed.launch --nproc_per_node=2 --master_port=4321 train.py -opt options/train/train_EDVR_M.yml --launcher pytorch on two GPUs, using the pretrained model from train_EDVR_woTSA_M.yml.

Is that correct?

Besides, in my training, l_pix is often on the order of e+05, which is different from your training log. Is this correct?

Best regards,

Marshall-yao commented 5 years ago

Hi, Xintao. Excuse me, I have read your EDVR code and it is an excellent project, so I would like to ask about the code structure. Could you share some experience about how you organized and built up the project?

Best regards,

xinntao commented 5 years ago

The code structure is actually easy to understand. I would advise reading the code directly yourself :-)

Marshall-yao commented 5 years ago

1) Thanks very much, Xintao.

2) Quick question about reducing training time: reproducing your EDVR work needs about 10 days of training, so I want to reduce the training time.

i) Reduce the training set. For example, I could choose REDS_train_sharp_part1.zip, which has 80 video sequences, as the training set. Could you give some suggestions about what to modify in the original code?

ii) Reduce the complexity of the model, e.g. reduce the front_RBs / back_RBs numbers (10 to 5), reduce N_frames (5 to 3), reduce nf (64 to 32).

Do you have any other suggestions for reducing the training time?

xinntao commented 5 years ago

2) Do you mean ten days for the EDVR-L model? i) Limiting the training set is not a good idea; you can instead reduce the training iterations while keeping more training data. ii) You can give it a try.
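If you do try a smaller model, the knobs you listed live in the network_G section of the training config. A rough sketch (key names assumed from the EDVR-M config, values only illustrative of your proposal):

```yaml
# Sketch only: key names assumed from train_EDVR_M.yml; verify against your version.
network_G:
  which_model_G: EDVR
  nf: 32          # feature channels, reduced from 64
  nframes: 3      # number of input frames, reduced from 5
  front_RBs: 5    # residual blocks before the PCD/TSA modules
  back_RBs: 5     # reconstruction residual blocks, reduced from 10
```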

Marshall-yao commented 5 years ago

Thanks, Xintao. 1) I mean the EDVR-REDS-M model. I will have a try. Does EDVR-Vimeo90K-M take longer to train than EDVR-REDS-M? I want to reproduce a model with a shorter training time.

2) Why did you calculate PSNR on the RGB channels rather than the Y channel for the REDS dataset?

Best regards,

xinntao commented 5 years ago

1) You can shorten the training time by decreasing the number of training iterations, e.g. setting niter: 600000 to niter: 450000 in the config file. We train EDVR with the same training scheme for both the Vimeo90K and REDS datasets, so they have the same training time.

2) I think it is better to evaluate on RGB, because the models are trained on RGB channels, and the competition metric is also computed on RGB channels.
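A minimal sketch of that change, assuming niter sits in the train section of the config as in the configs of that era:

```yaml
# Sketch only: the exact location of niter may vary by version.
train:
  niter: 450000   # was 600000; fewer iterations shorten training at a small cost in PSNR
```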

Marshall-yao commented 5 years ago

Thanks very much, Xintao. 1) Yes, I have seen the frequently asked questions on your GitHub. The avg_psnr curve there is for 001_EDVRwoTSA_scratch_lr4e-4_600k_REDS_LrCAR4S and shows only a marginal improvement near the end (about 0.01 dB), so setting niter from 600k to 450k is fine. But is the improvement from 450k to 600k also marginal when training with train_EDVR_M.yml?

2) Yes, I see.

xinntao commented 5 years ago
  1. I attached the evaluation figure for train_EDVR_M.yml (see attached image).

Marshall-yao commented 5 years ago

@xinntao Thanks so much for your patient reply.