Nope, they did not provide details for that. This project is essentially based on monodepth2, so many details and tricks are adapted from it. By the way, in PackResNetEncoder, x = (input_image - 0.45) / 0.225 is an input normalization for numerical stability.
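For reference, a minimal sketch of what that normalization line does in PyTorch (the tensor shape and value range here are just my assumptions, not taken from the repo):

```python
import torch

# hypothetical input batch: RGB images already scaled to [0, 1]
input_image = torch.rand(1, 3, 192, 640)

# shift and scale towards roughly zero mean / unit variance, same constants
# for all three channels, purely for numerical stability of training
x = (input_image - 0.45) / 0.225
```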
There still exist many conflicts with the paper. I have only read the encoder part so far, and I see two issues:
1. BasicBlock corresponds to the paper's ResidualBlock, I think, so it should be conv -> ELU -> conv -> ELU -> conv -> ELU -> GroupNorm -> dropout (see the sketch after this list).
2. `for i in range(1, blocks): layers.append(block(in_channels, out_channels))` builds a cascade of ResidualBlocks, which is not mentioned in the paper. In my opinion, though.
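A minimal PyTorch sketch of the ResidualBlock as I read it from the paper (kernel sizes 3/3/1, ELU after each conv, GroupNorm and Dropout at the end); the skip connection and the exact dropout variant are my assumptions, since the paper does not spell them out:

```python
import torch.nn as nn

class ResidualBlockSketch(nn.Module):
    """Hypothetical ResidualBlock: conv3 -> ELU -> conv3 -> ELU -> conv1 -> ELU
    -> GroupNorm(16) -> Dropout(0.5). Skip connection is an assumption."""

    def __init__(self, in_channels, out_channels, groups=16, dropout=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.ELU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.ELU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=1),
            nn.ELU(inplace=True),
            nn.GroupNorm(groups, out_channels),
            nn.Dropout2d(dropout),
        )
        # 1x1 projection when channel counts differ, identity otherwise
        self.skip = (nn.Conv2d(in_channels, out_channels, kernel_size=1)
                     if in_channels != out_channels else nn.Identity())

    def forward(self, x):
        return self.net(x) + self.skip(x)
```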
Yeah, the basic residual block is not the same, but after testing it does not influence the result.
As for the cascade residual block, the paper does mention it; see Table 1 for the network details.
For the cascade residual block that you say the paper mentions, by Table 1 do you mean this? "Each ResidualBlock is a sequence of 3 2D convolutional layers with K = 3/3/1 and ELU non-linearities, followed by GroupNorm with G = 16 and Dropout [41] of 0.5 in the final layer." I can't see the cascade from that. More details?
x2 means the ResidualBlock is duplicated 2 times.
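For illustration, a hedged sketch of how that "x2" could map onto the `for i in range(1, blocks)` loop: the first block may change channels and the repeated ones keep them (the channel handling for the repeated blocks is an assumption on my part; the snippet quoted above passes in_channels):

```python
import torch.nn as nn

def make_residual_layer(block, in_channels, out_channels, blocks=2):
    # "x2" in Table 1 read as: stack the same ResidualBlock `blocks` times
    layers = [block(in_channels, out_channels)]
    for _ in range(1, blocks):
        # repeated copies keep the output channel count (assumption)
        layers.append(block(out_channels, out_channels))
    return nn.Sequential(*layers)

# usage (with any ResidualBlock-like module taking in/out channels):
# layer = make_residual_layer(MyResidualBlock, 64, 64, blocks=2)
```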
thanks. got it
In PackResNetEncoder, is x = (input_image - 0.45) / 0.225 image normalization? Why don't I see it in the paper? There are more details like this that I cannot find in the paper.