Contrastive Language-Image Pre-training (CLIP) Driven Models and Partially Supervised Learning for Medical Image Segmentation
This issue is to discuss adding the CLIP-Driven Universal Model Features to MONAI.
Potential assignee: @tangy5
CLIP-Driven Universal Model
Key features
The implementation will bring several new features, as follows:
Universal Model: one model to detect and segment all abdominal organs and all types of tumors (liver tumor, kidney tumor, lung nodule, pancreas tumor, hepatic vessel tumor, colon tumor).
Language model (CLIP) and text-driven embeddings to boost medical image analysis (see the sketch after this list).
Training with partially labelled datasets.
Incremental learning: users can continue to train new segmentation classes with the currently trained model without catastrophic forgetting.
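As an illustration of the text-driven component, here is a minimal sketch of how per-class text embeddings could be produced with a pretrained CLIP text encoder. The Hugging Face `transformers` checkpoint (`openai/clip-vit-base-patch32`) and the prompt template are assumptions for illustration, not the final MONAI design:

```python
# Sketch: build one text embedding per segmentation class with a pretrained
# CLIP text encoder (Hugging Face transformers). The checkpoint and prompt
# template below are illustrative assumptions only.
import torch
from transformers import CLIPTokenizer, CLIPModel

CLASS_NAMES = [
    "liver", "liver tumor", "kidney", "kidney tumor",
    "pancreas", "pancreas tumor", "hepatic vessel tumor", "colon tumor",
]

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

# One prompt per class; the exact template is a design choice.
prompts = [f"a computerized tomography of a {name}" for name in CLASS_NAMES]
inputs = tokenizer(prompts, padding=True, return_tensors="pt")

with torch.no_grad():
    # (num_classes, embed_dim) embeddings that can condition the segmentation head
    text_embeddings = clip.get_text_features(**inputs)

print(text_embeddings.shape)  # e.g. torch.Size([8, 512])
```

The resulting embeddings would then condition the segmentation backbone (e.g., by generating per-class head parameters); exactly how is one of the details the implementation plan below would need to specify.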
⏳ Dataset: The Universal Model is trained with the following datasets
Implementation plans
More Details of the Feature Methodology:
Universal Model:
CLIP-driven text embeddings and text-driven segmentor:
Partially Supervised Learning (see the sketch after this list):
Incremental Learning:
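As an illustration of the partially supervised part, here is a minimal sketch of a masked per-class loss, assuming one binary channel per class and a per-sample label mask recording which classes the source dataset annotates. The function name, mask construction, and BCE loss choice are hypothetical, shown only to ground the discussion:

```python
# Sketch: masked per-class loss for partially labelled datasets. Channels that
# a sample's source dataset does not annotate are excluded from the loss.
import torch
import torch.nn.functional as F

def partial_label_loss(logits, targets, label_mask):
    """logits, targets: (B, C, H, W, D); label_mask: (B, C) with 1 = class annotated."""
    per_voxel = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    per_class = per_voxel.mean(dim=(2, 3, 4))            # (B, C): mean over voxels
    masked = per_class * label_mask                      # zero out unlabelled classes
    return masked.sum() / label_mask.sum().clamp(min=1)  # average over labelled ones

# Example: batch of 2 volumes, 8 classes; the second volume's source dataset
# annotates only classes 0 and 1 (e.g. liver and liver tumor).
logits = torch.randn(2, 8, 16, 16, 16)
targets = torch.randint(0, 2, (2, 8, 16, 16, 16)).float()
label_mask = torch.tensor([[1.0] * 8, [1.0, 1.0] + [0.0] * 6])
print(partial_label_loss(logits, targets, label_mask))
```

The same label-mask idea would also need to carry through to validation metrics, so that unannotated classes are not counted as errors.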
Detailed implementation steps will be provided after open discussion.
Welcome all suggestions and comments!
@ljwztc @MrGiovanni