HUANGLIZI / LViT

[IEEE Transactions on Medical Imaging/TMI] This repo is the official implementation of "LViT: Language meets Vision Transformer in Medical Image Segmentation"
MIT License

About the MoNuSeg dataset. #26

Closed windygooo closed 12 months ago

windygooo commented 12 months ago

I've seen papers stating that the MoNuSeg training set has 30 images and the test set contains 14 images, but the training data downloaded from your link has 37 images. May I ask which split the experiments in this paper use?