Closed · houqian2180320171 closed this issue 4 years ago
Hi, I didn't find a link to the paper in the place you mentioned. Could you re-post the paper link? Thank you very much!
------------------ Original message ------------------ From: "Tang, Wenyi" notifications@github.com; Sent: Tuesday, October 8, 2019, 6:07 PM; To: "LoSealL/VideoSuperResolution" VideoSuperResolution@noreply.github.com; Cc: "上善若水" 2570725376@qq.com; "Author" author@noreply.github.com; Subject: Re: [LoSealL/VideoSuperResolution] hi, Which paper is the corresponding paper for this code? (#89)
It's listed on the front page's README.md
In the table: https://github.com/LoSealL/VideoSuperResolution#network-list-and-reference-updating In col Published, each is a link to the paper.
Hi,I have already opened the link you just sent. Can you tell me which model it corresponds to? I want to look at the paper first, then look at the specific code. Thank you very much!
For example:
| Model | Published | Code* | VSR (TF)** | VSR (Torch) | Keywords | Pretrained |
|---|---|---|---|---|---|---|
| SRCNN | ECCV14 | -, Keras | Y | Y | Kaiming | √ |
This means the model SRCNN was originally published at ECCV 2014, and you can click "ECCV14" to open the link to the SRCNN paper.
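In other words, each row of the table maps a model name to a markdown link `[venue](paper-url)` in the Published column. A minimal sketch of how such a row can be parsed (the sample row and the arXiv URL of the SRCNN paper are used here only for illustration; the actual README may link to a different page):

```python
import re

# A sample row in the same format as the README's network-list table.
row = "| SRCNN | [ECCV14](https://arxiv.org/abs/1501.00092) | -, Keras | Y | Y | Kaiming | √ |"

# Split the row into cells, dropping the outer pipes and surrounding whitespace.
cells = [c.strip() for c in row.strip("|").split("|")]
model = cells[0]

# The "Published" column holds a markdown link [venue](paper-url);
# extracting it yields the venue tag and the paper URL for that model.
venue, url = re.match(r"\[(.+?)\]\((.+?)\)", cells[1]).groups()
print(model, venue, url)
```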
Thank you very much
Hi, can you recommend any papers about video super-resolution?
EDVR, DUF, and FRVSR are all good reads.