
πŸͺ· LOTUS

Introduction

This is the implementation for the MICCAI'24 paper "Letting Osteocytes Teach SR-microCT Bone Lacunae Segmentation: A Feature Variation Distillation Method via Diffusion Denoising".


Data Preparation

We collected a new dataset, BonesAI, of H&E histopathologies and SR-microCT images of femoral bone heads. The H&E histopathologies are annotated with 1343 osteocyte cell segmentations, while the SR-microCT images contain ~32.1k lacunar structures. Given their interdependence, we learn lacunae segmentation from SR-microCT while integrating osteocyte information from histopathology. We split our in-house dataset at the patient level, keeping a training-validation-test ratio of approximately 8:1:1. While we cannot release the dataset yet, we are working to make it accessible.
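A patient-level split means all images from one patient land in exactly one subset, so the 8:1:1 ratio is applied to patients rather than images. A minimal illustrative sketch (this helper is not part of the released code):

```python
import numpy as np

def patient_level_split(patient_ids, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split sample indices into train/val/test so that no patient
    appears in more than one subset (illustrative sketch)."""
    patients = np.unique(patient_ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(patients)
    n_train = int(round(ratios[0] * len(patients)))
    n_val = int(round(ratios[1] * len(patients)))
    train_p = set(patients[:n_train])
    val_p = set(patients[n_train:n_train + n_val])
    train, val, test = [], [], []
    for i, p in enumerate(patient_ids):
        (train if p in train_p else val if p in val_p else test).append(i)
    return train, val, test

# example: 10 patients, 2 images each
ids = [f"p{k}" for k in range(10) for _ in range(2)]
train, val, test = patient_level_split(ids)
```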

We also tested our strategy on the DeepLIIF dataset. Since DAPI + mpIF fluorescence images are more complex to analyze, we aim to aid their cell segmentation with knowledge transferred from the complementary immunohistochemistry (IHC) modality. We adopted the training-validation-test split of the original dataset release.

We apply no pre-processing to the BonesAI H&E histopathologies; for the SR-microCT images we follow the methodology outlined in [1]-[2]. In particular, we classify images into two sets based on their gray-level range, [0, 1] or [-1, 0], the latter originating from acquisition errors. A 3-step enhancement pipeline (1. normalization + clipping, 2. Otsu + K-means segmentation, 3. image masking) is then applied to extract the bone image content.
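A minimal NumPy sketch of this pipeline is below. The percentile clipping bounds are an assumption, and the K-means refinement of the Otsu mask is omitted for brevity; see [1]-[2] for the original methodology.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    # Otsu: pick the threshold maximizing between-class variance
    hist, edges = np.histogram(img, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    total = hist.sum()
    cum = np.cumsum(hist)
    cum_mean = np.cumsum(hist * centers)
    w0 = cum / total
    w1 = 1.0 - w0
    mu0 = cum_mean / np.maximum(cum, 1e-12)
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(total - cum, 1e-12)
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

def preprocess_microct(img):
    # images in [-1, 0] come from acquisition errors: remap to [0, 1]
    if img.max() <= 0:
        img = img + 1.0
    # 1. clipping (1st/99th percentiles, an assumption) + min-max normalization
    lo, hi = np.percentile(img, (1, 99))
    img = np.clip(img, lo, hi)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)
    # 2. foreground segmentation via Otsu (K-means refinement omitted here)
    mask = img > otsu_threshold(img)
    # 3. mask the image so only bone content remains
    return img * mask
```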

For DeepLIIF, we apply no pre-processing to the IHC images, but we merge the DAPI and mpIF images by taking the pixel-wise maximum of the two.
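The DAPI + mpIF merge is a one-line pixel-wise maximum; a sketch, assuming both modalities are loaded as arrays of the same shape:

```python
import numpy as np

def merge_dapi_mpif(dapi, mpif):
    # keep, at every pixel, the brighter of the two fluorescence channels
    assert dapi.shape == mpif.shape
    return np.maximum(dapi, mpif)
```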

We use data augmentation, including random crops, flips, rotations, and contrast, saturation, and brightness changes. All images are resized to 512Γ—512 before being fed into the network.
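The geometric augmentations can be sketched with NumPy alone; the crop size, probabilities, and jitter strengths below are illustrative, not the values used in the paper, and the final 512Γ—512 resize is left to your image library of choice:

```python
import numpy as np

def augment(img, mask, rng, crop=448):
    """Apply the same random crop/flip/rotation to image and mask;
    photometric jitter touches the image only (illustrative values)."""
    h, w = img.shape[:2]
    y = int(rng.integers(0, h - crop + 1))
    x = int(rng.integers(0, w - crop + 1))
    img, mask = img[y:y + crop, x:x + crop], mask[y:y + crop, x:x + crop]
    if rng.random() < 0.5:                       # horizontal flip
        img, mask = img[:, ::-1], mask[:, ::-1]
    if rng.random() < 0.5:                       # vertical flip
        img, mask = img[::-1], mask[::-1]
    k = int(rng.integers(0, 4))                  # random 90-degree rotation
    img, mask = np.rot90(img, k), np.rot90(mask, k)
    gain = 1.0 + 0.2 * (rng.random() - 0.5)      # brightness/contrast jitter
    bias = 0.1 * (rng.random() - 0.5)
    return np.clip(img * gain + bias, 0.0, 1.0), mask
```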

LOTUS                               # your WRK_DIR
.
β”œβ”€β”€ ...
└── data/                           # data dir
    β”œβ”€β”€ bonesai-histo/              # BonesAI H&E histopathologies
    β”‚   └── sample_n/               # Bone sample dir
    β”‚       β”œβ”€β”€ mask binary/        # Osteocyte manual segmentations
    β”‚       β”‚   └── mask.png
    β”‚       └── tissue images/      # WSI H&E histopathologies
    β”‚           └── histo.tif
    β”œβ”€β”€ bonesai-microct/            # BonesAI SR-microCT
    β”‚   └── sample_n/               # Bone sample dir
    β”‚       β”œβ”€β”€ mask binary/        # Lacunae manual segmentations
    β”‚       β”‚   └── mask.png
    β”‚       └── tissue images/      # WSI SR-microCT
    β”‚           └── microct.tif
    └── DeepLIIF-mm/                # DeepLIIF dataset
        β”œβ”€β”€ ihc/                    # DeepLIIF immunohistochemistry (IHC)
        β”‚   β”œβ”€β”€ train/              # IHC DeepLIIF training set
        β”‚   β”‚   β”œβ”€β”€ masks/          # Ground-truth segmentation patches
        β”‚   β”‚   β”‚   └── mask.png
        β”‚   β”‚   └── tissue images/  # IHC DeepLIIF training patches
        β”‚   β”‚       └── ihc.png
        β”‚   β”œβ”€β”€ val/                # IHC DeepLIIF validation set
        β”‚   └── test/               # IHC DeepLIIF test set
        β”œβ”€β”€ dapi/                   # DeepLIIF DAPI
        └── pm/                     # DeepLIIF mpIF

Implementation

1. Hardware pre-requisites

We ran our training-validation-testing experiments on an AMD Ryzen 7 5800X @ 3.8 GHz with a 24 GB NVIDIA RTX A5000 GPU. Different hardware configurations may show slight performance variations (1-2%).

2. Dependencies

git clone https://github.com/isabellapoles/LOTUS.git
cd LOTUS

3. Train the model

First, we train the single-modal osteocyte segmentation model (teacher model) on H&E images. It comes pre-trained on the NuInsSeg dataset.

python3 train_s_bonesai.py --checkpoint-path './checkpoint/bonesai' \
                           --dataset-path './data' \
                           --model_configs 'config_s_bonesai.py'

Next, we train the histopathology-enhanced lacunae segmentation model (student model) on SR-microCT images.

python3 train_t_bonesai.py --checkpoint-path './checkpoint/bonesai' \
                           --dataset-path './data' \
                           --model_configs 'config_t_bonesai.py'

The model parameters, hyperparameters, pre-trained weights, and checkpoint variables for the teacher and the student must be specified in the corresponding configs/config_s_bonesai.py and configs/config_t_bonesai.py files.
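As a purely illustrative example (the field names and paths below are hypothetical, not the repository's actual config schema), such a config file typically collects flat module-level entries like:

```python
# configs/config_t_bonesai.py -- hypothetical field names, for illustration only
batch_size = 8
lr = 1e-4
epochs = 200
teacher_weights = "./checkpoint/bonesai/teacher_best.pth"  # hypothetical path
resume_from = None  # checkpoint to resume training from, if any
```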

The same holds for reproducing the results on the DeepLIIF dataset. First, we pre-train the single-modal segmentation model (teacher model) on IHC images.

python3 train_s_deepliif.py --checkpoint-path './checkpoint/DeepLIIF' \
                            --dataset-path './data' \
                            --model_configs 'config_s_deepliif.py'

Next, we train the IHC-enhanced cell segmentation model (student model) on DAPI images.

python3 train_t_deepliif.py --checkpoint-path './checkpoint/DeepLIIF' \
                            --dataset-path './data' \
                            --model_configs 'config_t_deepliif.py'

The model parameters, hyperparameters, pre-trained weights, and checkpoint variables for the teacher and the student must be specified in the corresponding configs/config_s_deepliif.py and configs/config_t_deepliif.py files.

4. Test the model

To test the student lacunae segmentation model on SR-microCT:

python3 test_bonesai.py --checkpoint-path './checkpoint/bonesai' \
                        --dataset-path './data' \
                        --model_configs 'config_s_bonesai.py'

To test the student cell segmentation model on DAPI IF images:

python3 test_deepliif.py --checkpoint-path './checkpoint/DeepLIIF' \
                         --dataset-path './data' \
                         --model_configs 'config_s_deepliif.py'

The student's pre-trained weights and checkpoint variables must be specified in the corresponding configs/config_s_bonesai.py or configs/config_s_deepliif.py file.

5. Model weights

Model weights are available in the ./checkpoint directory.

Citation

If you find our work useful in your research, please consider citing our paper:

@inproceedings{poles2024letting,
  title={Letting Osteocytes Teach SR-MicroCT Bone Lacunae Segmentation: A Feature Variation Distillation Method via Diffusion Denoising},
  author={Poles, Isabella and Santambrogio, Marco D. and D'Arnese, Eleonora},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={383--393},
  year={2024},
  organization={Springer}
}

Acknowledgments

This work was supported by the Polisocial Award 2022 - Politecnico di Milano. The authors acknowledge F. Buccino and M. Vergani for their expertise on bone lacunae and osteocyte mechanics, Elettra Sincrotrone Trieste for providing access to its synchrotron radiation facilities, A. Zeni and D. Conficconi for valuable suggestions and discussions, and NVIDIA Corporation for the Academic Hardware Grant Program.

Parts of our code are taken from DiffKD.

Contact

Isabella Poles (isabella.poles@polimi.it), Eleonora D'Arnese (eleonora.darnese@polimi.it)