Code associated with the publication: Scaling self-supervised learning for histopathology with masked image modeling, A. Filiot et al., medRxiv (2023). We publicly release Phikon 🚀
Hi Team Owkin,

First off, truly impressive work, and congratulations on securing first place in the Kaggle competition!

I have a question regarding the architecture of Chowder in the paper. The illustration indicates that the tile embeddings (local descriptors) are processed through a 1-D conv layer before proceeding to the min-max layer. However, when I looked at the code implementation, I noticed that an MLP is used at this stage instead: https://github.com/owkin/HistoSSLscaling/blob/73f1d191b1d04d4b88307a9601c4fcdbf23b72fa/rl_benchmarks/models/slide_models/chowder.py#L129-L134 Could you clarify whether the MLP has indeed replaced the 1-D conv layer, or am I overlooking something?

In this particular context, the Conv1D layer is equivalent to a fully-connected layer; for convenience, it is implemented as a fully-connected (MLP) layer.
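To make the equivalence concrete, here is a minimal numpy sketch (all sizes and variable names are illustrative, not taken from the repo): a Conv1D over the tile axis with kernel size 1 applies the same weight matrix to each tile embedding independently, which is exactly what a fully-connected layer does when applied per tile.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: one slide with n_tiles embeddings of dimension d_in,
# projected to d_out output channels/features.
n_tiles, d_in, d_out = 5, 8, 3
tiles = rng.normal(size=(n_tiles, d_in))  # tile embeddings (local descriptors)
W = rng.normal(size=(d_out, d_in))        # shared weights
b = rng.normal(size=d_out)                # shared bias

# Conv1D over the tile axis with kernel_size=1: output channel c at tile
# position t is W[c] . tiles[t] + b[c] -- no mixing across tile positions.
conv1d_out = np.stack([W @ tiles[t] + b for t in range(n_tiles)])

# Fully-connected (linear) layer applied independently to every tile embedding.
fc_out = tiles @ W.T + b

# The two computations are identical.
assert np.allclose(conv1d_out, fc_out)
```

Because the kernel spans a single tile, the convolution never mixes information across tiles, so swapping it for a per-tile fully-connected layer changes nothing about the computation feeding the min-max layer.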