Closed: kirilllzaitsev closed this issue 1 year ago
The output of the "enc3" layer is used for two purposes: first as an input to the "bottleneck" layer, and second for the concatenation before "dec3". Since its output has 64 channels, a write_gap is needed for the channel-wise concatenation. However, we did not set write_gap on "enc3" directly, because its output is also consumed by the bottleneck layer, which expects the data written without a gap. So we defined a passthrough layer whose only job is to read that data gap-free and rewrite it with write_gap=1 for the concatenation.
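As a rough sketch of the pattern described above: the layer names, offsets, and processor maps below are illustrative assumptions, not taken from the actual CamVid YAML, but the key idea is the extra passthrough layer that re-reads enc3's output and writes it with write_gap: 1 so the decoder can consume it as a channel-wise concatenation:

```yaml
# Hypothetical izer YAML fragment; names and offsets are made up for illustration.
- name: enc3              # 64-channel output, consumed twice
  operation: conv2d
  activate: ReLU
  out_offset: 0x2000
- name: bottleneck        # reads enc3's output without a gap
  in_sequences: enc3
  operation: conv2d
  activate: ReLU
  out_offset: 0x4000
- name: enc3_pt           # passthrough copy of enc3, written with a gap
  in_sequences: enc3
  operation: passthrough
  write_gap: 1            # interleave the data for channel-wise concatenation
  out_offset: 0x6000
- name: dec3              # concatenates the upsampled path with enc3_pt
  in_sequences: [up3, enc3_pt]
  operation: conv2d
```

The passthrough layer costs one extra pass over the data, but it lets enc3 itself stay gap-free for the bottleneck while still providing an interleaved copy for the skip connection.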
Based on the CamVid example and the corresponding camvid-unet-large.yaml, for a U-Net with 3 concatenation operations your implementation suggests using a single passthrough layer at the bottleneck:
Could you explain the reasoning behind not adding a passthrough layer for every concatenation in this network? In which cases do I need to use camvid-unet-large-fakept.yaml & izer/add_fake_passthrough.py instead?
Given the following U-Net definition for a regression task, could you help me understand why multiple passthrough layers won't allow it to properly run inference: