LoSealL / VideoSuperResolution

A collection of state-of-the-art video and single-image super-resolution architectures, reimplemented in TensorFlow.
MIT License

Hi, which paper is the corresponding paper for this code? #89

Closed · houqian2180320171 closed this issue 4 years ago

LoSealL commented 4 years ago

It's listed on the front page's README.md

houqian2180320171 commented 4 years ago

Hi, I didn't find a link to the paper in the place you mentioned. Could you re-post the paper link? Thank you very much!


LoSealL commented 4 years ago

In the table: https://github.com/LoSealL/VideoSuperResolution#network-list-and-reference-updating. In the "Published" column, each entry is a link to the paper.

houqian2180320171 commented 4 years ago

Hi, I have already opened the link you just sent. Can you tell me which paper corresponds to which model? I want to read the paper first and then look at the specific code. Thank you very much!


LoSealL commented 4 years ago

For example:

| Model | Published | Code* | VSR (TF)** | VSR (Torch) | Keywords | Pretrained |
| ----- | --------- | ----- | ---------- | ----------- | -------- | ---------- |
| SRCNN | ECCV14 | -, Keras | Y | Y | Kaiming | √ |

This means the SRCNN model was originally published at ECCV 2014, and you can click "ECCV14" to open the link to the SRCNN paper.
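For a quick sense of what SRCNN looks like before reading the paper, here is a minimal sketch in TensorFlow/Keras. It assumes the paper's base 9-1-5 configuration with 64 and 32 filters and a bicubic-upscaled input; it is only an illustration, not this repository's implementation, and the helper name `build_srcnn` is made up for the example.

```python
# Minimal SRCNN sketch (illustrative only, not the repo's implementation).
# SRCNN refines a bicubic-upscaled image with three convolutions:
# patch extraction (9x9), non-linear mapping (1x1), reconstruction (5x5).
import tensorflow as tf

def build_srcnn(channels: int = 1) -> tf.keras.Model:
    # Input is already upscaled to the target resolution by bicubic interpolation.
    lr = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(64, 9, padding="same", activation="relu")(lr)
    x = tf.keras.layers.Conv2D(32, 1, padding="same", activation="relu")(x)
    sr = tf.keras.layers.Conv2D(channels, 5, padding="same")(x)
    return tf.keras.Model(lr, sr, name="srcnn")

if __name__ == "__main__":
    model = build_srcnn()
    model.compile(optimizer="adam", loss="mse")  # MSE loss, as in the paper
    model.summary()
```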

houqian2180320171 commented 4 years ago

Thank you very much


houqian2180320171 commented 4 years ago

Hi, can you recommend any papers on video super-resolution?


LoSealL commented 4 years ago

EDVR, DUF, and FRVSR are all good ones to read.