Closed guozixunnicolas closed 5 years ago
It is at the 5-channel output stage (there are 4 lanes and 1 background, so there are 5 channels).
Okay :) How about the feedforward time in that case? Since I have only one GPU at hand, the GPU runs out of memory if I implement message passing at the final stage. So I assume it takes a lot of memory and slows down the feedforward time?
Applying SCNN at the output requires much less memory than applying it to the 128-channel top hidden layer, so you can give it a try. Yes, it would slow down the feedforward time, but I can't recall the exact running time.
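For anyone curious about the memory difference, here is a minimal NumPy sketch of one direction (top-to-bottom) of SCNN-style spatial message passing on a (C, H, W) map. This is a simplification I wrote for illustration, not the repo's code: the real SCNN slice convolution mixes all C channels with a C x w kernel, whereas this sketch uses a per-channel 1D kernel. It still shows why C matters — the per-step cost scales with the channel count, so C=5 (output) is far cheaper than C=128 (top hidden layer).

```python
import numpy as np

def scnn_message_pass_down(x, kernel):
    """Simplified SCNN-style top-to-bottom message passing (illustrative only).

    x:      feature map of shape (C, H, W), e.g. C = 5 for 4 lanes + background.
    kernel: 1D kernel applied along W to each row slice.

    Each row i receives a ReLU-gated message from the row above:
        x[:, i] += relu(conv1d(x[:, i-1], kernel))
    Note: real SCNN convolves across channels too (kernel of size C x w);
    this per-channel version is a hypothetical simplification.
    """
    x = x.copy()
    C, H, W = x.shape
    for i in range(1, H):          # sweep rows top to bottom
        for c in range(C):         # cost per row scales with C
            msg = np.convolve(x[c, i - 1], kernel, mode="same")
            x[c, i] += np.maximum(msg, 0.0)  # ReLU nonlinearity
    return x

out = scnn_message_pass_down(np.ones((5, 4, 8)), np.array([0.1, 0.2, 0.1]))
```

The sequential row-by-row sweep is also why message passing slows down the forward pass: unlike an ordinary convolution, the rows cannot all be computed in parallel.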
Thank you:)
Hello, in the paper there is a comparison between message passing at the top hidden layer and at the output layer.
May I know where in the output layer you implemented SCNN? Is it at the 3-channel output stage?
Best,
ZX