DIAGNijmegen / ULS23

Repository for the Universal Lesion Segmentation Challenge '23
https://uls23.grand-challenge.org/


ULS23 Challenge Repository


Labels

The annotations folder contains the labels for the training data of the ULS23 Challenge.

To download the associated imaging data, visit:

Note: when using MONAI to work with the data, please ensure you are on version >= 1.2.0. We have had reports of problems when loading the data with older versions.
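A minimal guard for the version requirement above could look like the sketch below. The `check_monai_version` helper is our own illustration (it assumes plain numeric version strings); `monai.transforms.LoadImage` is MONAI's standard loader and is shown commented out for context only.

```python
# Fail fast if the installed MONAI is older than 1.2.0, since the
# challenge organisers report data-loading problems with older versions.

def version_tuple(version: str) -> tuple:
    """Parse a dotted version string like '1.2.0' into a comparable tuple.
    Assumes plain numeric components (no 'rc' / 'dev' suffixes)."""
    return tuple(int(part) for part in version.split(".")[:3])

def check_monai_version(installed: str, minimum: str = "1.2.0") -> bool:
    """True if the installed version meets the challenge's minimum."""
    return version_tuple(installed) >= version_tuple(minimum)

# Typical use:
# import monai
# assert check_monai_version(monai.__version__), "please upgrade MONAI to >= 1.2.0"
# image = monai.transforms.LoadImage(image_only=True)("path/to/volume.nii.gz")
```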

Novel data annotation procedure:

ULS23_DeepLesion3D: Using reader studies on GrandChallenge, trained (bio-)medical students used the lesion measurement information from DeepLesion to segment each lesion in 3D in the axial plane. Each lesion was segmented in triplicate and the majority mask was used as the final label. Lesions were selected using hard-negative mining with a standard 3D nnUnet trained on the fully annotated, publicly available data: we compared the axial diameters extracted from this model's predictions to the reference measurements provided by DeepLesion and included the lesions with the worst performance. Lesions were also chosen to be representative of the entire thorax-abdomen area: 200 abdominal, 100 bone, 50 kidney, 50 liver, 100 lung, 100 mediastinal and 150 assorted lesions.
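The per-voxel majority vote over the triplicate annotations can be sketched as follows. `majority_mask` is an illustrative helper, not code from this repository:

```python
import numpy as np

def majority_mask(masks):
    """Combine an odd number of binary masks by per-voxel majority vote.

    For the triplicate ULS23_DeepLesion3D annotations this keeps a voxel
    only if at least 2 of the 3 readers marked it as lesion.
    """
    stacked = np.stack([np.asarray(m).astype(bool) for m in masks])
    votes_needed = stacked.shape[0] // 2 + 1  # 2 for triplicate
    return (stacked.sum(axis=0) >= votes_needed).astype(np.uint8)
```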

ULS23_Radboudumc_Bone & ULS23_Radboudumc_Pancreas: The VOIs in these datasets are from studies conducted at the Radboudumc hospital in Nijmegen, The Netherlands. Lesions were selected based on the radiological reports mentioning bone or pancreas disease. An experienced radiologist identified and then segmented the lesions in 3D. ULS23_Radboudumc_Bone contains both sclerotic & lytic bone lesions.

If you notice any problems with an image or mask, please open an issue on the repo and we will try to correct it.

Baseline Model

The baseline_model folder contains the minor adaptations to the nnUnetv2 framework that are needed to run the baseline model for the challenge. Simply copy the files in the nnunetv2 subfolder over your local nnunetv2 installation. To prevent resampling, we created a dummy resampling function, 'no_resampling_data_or_seg_to_shape', which can be referenced in the plans files. We also provide additional trainer classes for training with more epochs and a smaller initial learning rate; we used these when fine-tuning our baseline model pre-trained on the weakly annotated data.
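The no-op resampling hook might look like the sketch below. The argument names are an assumption based on nnUNetv2's default `resample_data_or_seg_to_shape`; only the function's name and its pass-through behaviour are taken from this repository.

```python
import numpy as np

def no_resampling_data_or_seg_to_shape(data, new_shape,
                                       current_spacing, new_spacing,
                                       **kwargs):
    """Dummy resampling function: return the input array unchanged.

    The signature mirrors nnUNetv2's resample_data_or_seg_to_shape so a
    plans file can point at it; the shape and spacing arguments are
    deliberately ignored, which skips resampling altogether.
    (Argument names are an assumption, not verified against the repo.)
    """
    return data
```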

Model weights and the algorithm container of the best-performing baseline model for the weakly-annotated data are stored on Zenodo. The container can also be used directly on novel data via GrandChallenge.

We also include the data split used for testing on the combined, fully-annotated training datasets and the plans files.

The GC_algorithm folder will contain the code for transforming the nnUnetv2 baseline into a GrandChallenge compatible algorithm.

Data Processing Code

The data_processing folder contains the code used to convert the source datasets into the ULS23 format: cropping VOIs around lesions and preparing the semi-supervised data using GrabCut.
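The VOI cropping step can be sketched as below: a fixed-size box is cut around a lesion centre and zero-padded wherever it extends past the scan border. `crop_voi` and the default box size are illustrative assumptions, not the repository's actual code or the exact ULS23 crop dimensions.

```python
import numpy as np

def crop_voi(volume, center, voi_shape=(64, 128, 128)):
    """Crop a fixed-size VOI centred on `center` (z, y, x indices),
    zero-padding wherever the box sticks out of the scan.

    `voi_shape` is an illustrative default, not necessarily the
    exact ULS23 crop size.
    """
    out = np.zeros(voi_shape, dtype=volume.dtype)
    src, dst = [], []
    for c, size, dim in zip(center, voi_shape, volume.shape):
        start = c - size // 2                      # box start, may be < 0
        src_lo, src_hi = max(start, 0), min(start + size, dim)
        dst_lo = src_lo - start                    # offset into the padded box
        src.append(slice(src_lo, src_hi))
        dst.append(slice(dst_lo, dst_lo + (src_hi - src_lo)))
    out[tuple(dst)] = volume[tuple(src)]
    return out
```

For a lesion near the scan border the out-of-bounds region of the box stays zero, so every VOI has the same shape regardless of lesion position.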

Contact Information

max.degrauw@radboudumc.nl, alessa.hering@radboudumc.nl

Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
