This is the official PyTorch implementation of our MICCAI 2023 paper "TPRO: Text-prompting-based Weakly Supervised Histopathology Tissue Segmentation".
Download the LUAD-HistoSeg and BCSS-WSSS datasets and organize the directory structure in the following format:
```
data/
|-- LUAD-HistoSeg
|   |-- train
|   |   |-- img
|   |-- test
|   |   |-- img
|   |   |-- mask
|   |-- valid
|       |-- img
|       |-- mask
|-- BCSS-WSSS
    |-- train
    |   |-- img
    |-- test
    |   |-- img
    |   |-- mask
    |-- valid
        |-- img
        |-- mask
```
The ImageNet-1k pre-trained weights of the vision encoder can be downloaded from the official SegFormer implementation.
Train the classification model (2 GPUs):
```shell
# LUAD-HistoSeg
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port 16732 train_cls.py --config ./work_dirs/luad/classification/config.yaml
# BCSS-WSSS
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port 16372 train_cls.py --config ./work_dirs/bcss/classification/config.yaml
```
Run the trained classification model on the training split to generate predictions:
```shell
# LUAD-HistoSeg
python evaluate_cls.py --dataset luad --model_path path/to/classification/model --save_dir ./work_dirs/luad/classification/predictions --split train
# BCSS-WSSS
python evaluate_cls.py --dataset bcss --model_path path/to/classification/model --save_dir ./work_dirs/bcss/classification/predictions --split train
```
Train the segmentation model (2 GPUs):
```shell
# LUAD-HistoSeg
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port 16372 train_seg.py --config ./work_dirs/luad/segmentation/config.yaml
# BCSS-WSSS
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port 16372 train_seg.py --config ./work_dirs/bcss/segmentation/config.yaml
```