[IEEE Transactions on Medical Imaging/TMI] This repo is the official implementation of "LViT: Language meets Vision Transformer in Medical Image Segmentation"
I've seen papers stating that the MoNuSeg training set has 30 images and the test set has 14, but the training data downloaded from your link contains 37 images. May I ask which split the experiments in this paper use?