clhbbd closed this issue 10 months ago.
Thank you for your interest in our work! We directly use a SwinUNETR-like architecture as our backbone and only change the encoder part. The final downsampling block is performed in self.encoder5 in network_backbone.py. I also made an adjustment to the first skip connection shown in the figure; the performance varies only slightly (from 0.936 to 0.938 on the FLARE dataset), but you can certainly give it a try.
Thank you very much for your reply. OK, I will try again.
Hi @clhbbd,
Have you tried it? Could you share the results and whether there were any significant variations?
Comparing the network architecture in the article with the network diagram implied by the code, I noticed significant differences. For example, the code includes only four downsampling stages, four upsampling stages, and four skip connections, which does not match the diagram provided in the article. I apologize for raising this question and would appreciate it if you could reply.
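For readers following the discussion of the four downsampling stages, here is a minimal sketch of how stride-2 downsampling reduces spatial resolution stage by stage in a SwinUNETR-style encoder. The 96-voxel input size and the assumption of exactly stride-2 at every stage are illustrative assumptions, not taken from network_backbone.py:

```python
# Minimal sketch (assumptions: 96-voxel input, stride-2 downsampling
# at each stage). Each of the four downsampling stages halves the
# spatial resolution; the decoder mirrors this with four upsampling
# stages, and each resolution level can carry one skip connection.

def trace_resolutions(input_size=96, num_downsamples=4):
    """Return the spatial resolution after each downsampling stage."""
    sizes = [input_size]
    for _ in range(num_downsamples):
        sizes.append(sizes[-1] // 2)  # one stride-2 downsampling
    return sizes

print(trace_resolutions())  # [96, 48, 24, 12, 6]
```

This is why four downsamplings naturally pair with four upsamplings and four skip connections: each encoder level below the input resolution has exactly one matching decoder level.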