Closed xmlyqing00 closed 4 years ago
Thanks for the author's reply. They are equivalent.
Why are they equivalent?
@pixelsmaker It is because convolution is a linear operation. Applying a single convolution to the concatenation of all the inputs is exactly equivalent to summing the outputs of separate convolutions applied to each input, provided the weights of the single convolution are split along the channel dimension to form the separate convolutions.
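A minimal PyTorch sketch of this equivalence (channel sizes here are made up for illustration, not taken from the repo): one convolution over the concatenated frame and mask gives the same output as two convolutions whose weights are the channel-wise slices of the original kernel, summed.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical channel sizes, chosen only for this demo.
frame_ch, mask_ch, out_ch = 3, 1, 8
frame = torch.randn(1, frame_ch, 16, 16)
mask = torch.randn(1, mask_ch, 16, 16)

# Option A: one convolution over the concatenated input.
conv_cat = nn.Conv2d(frame_ch + mask_ch, out_ch, kernel_size=3,
                     padding=1, bias=False)
out_cat = conv_cat(torch.cat([frame, mask], dim=1))

# Option B: slice the same kernel into two convolutions and sum.
conv_f = nn.Conv2d(frame_ch, out_ch, kernel_size=3, padding=1, bias=False)
conv_m = nn.Conv2d(mask_ch, out_ch, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    conv_f.weight.copy_(conv_cat.weight[:, :frame_ch])
    conv_m.weight.copy_(conv_cat.weight[:, frame_ch:])
out_sum = conv_f(frame) + conv_m(mask)

print(torch.allclose(out_cat, out_sum, atol=1e-5))
```

When the two convolutions are trained separately from scratch (as in the implementation), their learned weights play the role of the corresponding channel slices of the single kernel, so the two parameterizations cover the same function space.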
Hi,
Thanks for your outstanding model and its clean implementation. I have a question about the memory encoder. In the class
Encoder_M
, you sum up the frame and the mask at the very beginning. However, in your paper you say the inputs are concatenated along the channel dimension before being fed into the encoder.
Could you explain this difference or talk more about the intuition behind your implementation? Thanks in advance.