MIC-DKFZ / nnUNet


RGB png dataset benchmarking #1948

Open HamzaFarooq013 opened 7 months ago

HamzaFarooq013 commented 7 months ago

Hi, I am trying to benchmark nnU-Net on my own 2D (RGB .png) dataset but got errors in (verify_dataset_integrity). Can you please share some configuration settings for a plain .png dataset with binary labels for benchmarking on nnU-Net? (Thanks in advance)
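For reference, nnU-Net v2 expects a 2D png dataset in its standard raw layout: images carry a `_XXXX` channel suffix, label maps carry no suffix and must be single-channel. The dataset name below is the one used later in this thread; the case names are placeholders:

```
nnUNet_raw/Dataset876_Hamzabenchmark/
├── dataset.json
├── imagesTr/
│   ├── case_001_0000.png    <- the _0000 suffix identifies the channel
│   └── ...
├── labelsTr/
│   ├── case_001.png         <- no suffix, single channel, values 0 and 1
│   └── ...
└── imagesTs/                <- optional test images
```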

My settings for the dataset.json (`dataset_name`, `task_folder_name`, `train_label_dir` and `test_dir` are defined earlier in the notebook):

```python
import os
import json
from collections import OrderedDict

overwrite_json_file = True  # make it True if you want to overwrite the dataset.json file in Dataset_folder
json_file_exist = False

if os.path.exists(os.path.join(task_folder_name, 'dataset.json')):
    print('dataset.json already exist!')
    json_file_exist = True

if json_file_exist == False or overwrite_json_file:

    json_dict = OrderedDict()
    json_dict['name'] = dataset_name
    json_dict['description'] = "multi_labels_uniclass"
    json_dict['tensorImageSize'] = "3D"
    json_dict['reference'] = "ABC"
    json_dict['licence'] = "ABC"
    json_dict['release'] = "0.1"

    # you may mention more than one modality
    json_dict['channel_names'] = {
        "0": "R",
        "1": "G",
        "3": "B"
    }

    # set expected file ending
    json_dict["file_ending"] = ".png"

    # label names should be mentioned for all the labels in the dataset
    json_dict['labels'] = {
        "background": 0,
        "white matter": 1
    }

    train_ids = os.listdir(train_label_dir)
    test_ids = os.listdir(test_dir)
    json_dict['numTraining'] = len(train_ids)
    json_dict['numTest'] = len(test_ids)

    with open(os.path.join(task_folder_name, "dataset.json"), 'w') as f:
        json.dump(json_dict, f, indent=4, sort_keys=True)

    if os.path.exists(os.path.join(task_folder_name, 'dataset.json')):
        if json_file_exist == False:
            print('dataset.json created!')
        else:
            print('dataset.json overwritten!')
```

```
!nnUNetv2_plan_and_preprocess -d 876 -c 2d_fullres --verify_dataset_integrity
```

Error:

```
Unexpected number of modalities. Expected: 3. Got: 1.
```
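For comparison, the natural-image example that ships with nnU-Net v2 (Dataset120_RoadSegmentation) declares the three color channels with consecutive keys 0, 1, 2 rather than 0, 1, 3, with one RGB png per case. A sketch along those lines, assuming the `generate_dataset_json` helper from `nnunetv2.dataset_conversion` and the folder layout above:

```python
# Sketch modeled on nnU-Net v2's Dataset120_RoadSegmentation conversion script.
# The dataset path is an assumption; adapt it to your setup.
import os
from nnunetv2.dataset_conversion.generate_dataset_json import generate_dataset_json

dataset_dir = 'nnUNet_raw/Dataset876_Hamzabenchmark'  # assumed location
num_train = len(os.listdir(os.path.join(dataset_dir, 'labelsTr')))

generate_dataset_json(
    dataset_dir,
    channel_names={0: 'R', 1: 'G', 2: 'B'},  # consecutive keys starting at 0
    labels={'background': 0, 'white matter': 1},
    num_training_cases=num_train,
    file_ending='.png',
)
```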

HamzaFarooq013 commented 7 months ago

I have kind of solved that issue, but why does this shape error happen?

The data passes verification, but there is still a dimension error:

```
Fingerprint extraction...
Dataset876_Hamzabenchmark
Using <class 'nnunetv2.imageio.natural_image_reader_writer.NaturalImage2DIO'> as reader/writer

####################
verify_dataset_integrity Done.
If you didn't see any error messages then your dataset is most likely OK!
####################

Experiment planning...
2D U-Net configuration:
{'data_identifier': 'nnUNetPlans_2d', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 49,
 'patch_size': array([256, 256]), 'median_image_size_in_voxels': array([256., 256.]),
 'spacing': array([1., 1.]), 'normalization_schemes': ['ZScoreNormalization'],
 'use_mask_for_norm': [False], 'UNet_class_name': 'PlainConvUNet', 'UNet_base_num_features': 32,
 'n_conv_per_stage_encoder': (2, 2, 2, 2, 2, 2, 2), 'n_conv_per_stage_decoder': (2, 2, 2, 2, 2, 2),
 'num_pool_per_axis': [6, 6],
 'pool_op_kernel_sizes': [[1, 1], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2]],
 'conv_kernel_sizes': [[3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3]],
 'unet_max_num_features': 512,
 'resampling_fn_data': 'resample_data_or_seg_to_shape',
 'resampling_fn_seg': 'resample_data_or_seg_to_shape',
 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None},
 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None},
 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape',
 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None},
 'batch_dice': True}

Using <class 'nnunetv2.imageio.natural_image_reader_writer.NaturalImage2DIO'> as reader/writer
Plans were saved to d:\Hamza Farooq\T_Net\T_Net\Data_nnUnet\nnUNet_preprocessed\Dataset876_Hamzabenchmark\nnUNetPlans.json
Preprocessing...
Preprocessing dataset Dataset876_Hamzabenchmark
Configuration: 2d...
Configuration: 3d_fullres...
INFO: Configuration 3d_fullres not found in plans file nnUNetPlans.json of dataset Dataset876_Hamzabenchmark. Skipping.
Configuration: 3d_lowres...
INFO: Configuration 3d_lowres not found in plans file nnUNetPlans.json of dataset Dataset876_Hamzabenchmark. Skipping.
```

#########################################################################

```
Exception in background worker 0:
could not broadcast input array from shape (3,301,301) into shape (1,301,301)
using pin_memory on device 0
Traceback (most recent call last):
  File "d:\Hamza Farooq\.venv\lib\site-packages\batchgenerators\dataloading\nondet_multi_threaded_augmenter.py", line 53, in producer
    item = next(data_loader)
  File "d:\Hamza Farooq\.venv\lib\site-packages\batchgenerators\dataloading\data_loader.py", line 126, in __next__
    return self.generate_train_batch()
  File "d:\Hamza Farooq\.venv\lib\site-packages\nnunetv2\training\dataloading\data_loader_2d.py", line 92, in generate_train_batch
    seg_all[j] = np.pad(seg, ((0, 0), *padding), 'constant', constant_values=-1)
ValueError: could not broadcast input array from shape (3,301,301) into shape (1,301,301)
```
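The ValueError shows the segmentation arriving with 3 channels (shape (3, 301, 301)) where the 2D dataloader allocated room for one. If the label pngs were saved as RGB, converting them to single-channel masks with values 0 and 1 is a plausible fix; a sketch, where the folder path and the foreground mapping are assumptions:

```python
# Sketch: collapse RGB label pngs to single-channel {0, 1} masks in place.
# The folder path and the "anything non-zero is foreground" rule are assumptions.
import os
import numpy as np
from PIL import Image

labels_dir = 'nnUNet_raw/Dataset876_Hamzabenchmark/labelsTr'  # assumed path
for fname in os.listdir(labels_dir):
    if not fname.endswith('.png'):
        continue
    path = os.path.join(labels_dir, fname)
    arr = np.array(Image.open(path))
    if arr.ndim == 3:                   # (H, W, 3): RGB mask, keep one channel
        arr = arr[..., 0]
    mask = (arr > 0).astype(np.uint8)   # binary labels: background 0, foreground 1
    Image.fromarray(mask, mode='L').save(path)
```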

Ismatse commented 7 months ago

Hi @HamzaFarooq013, have you been able to solve that problem? I have kind of the same shape issue while training and I have no idea why.

HamzaFarooq013 commented 7 months ago

Hello, I have not been able to solve this issue. Declaring a greyscale channel gets the data verification to pass, but in training it causes a problem with the model's input dimensions.
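A quick way to check what the reader actually produces for images versus labels (a debugging sketch; the file names are placeholders, and it uses the NaturalImage2DIO class from the logs above):

```python
# Debugging sketch: print the shapes nnU-Net's natural-image reader returns.
# File names are placeholders; adjust to real cases from your dataset.
from nnunetv2.imageio.natural_image_reader_writer import NaturalImage2DIO

io = NaturalImage2DIO()
img, _ = io.read_images(['imagesTr/case_001_0000.png'])
seg, _ = io.read_seg('labelsTr/case_001.png')
print('image:', img.shape)  # an RGB png comes back with 3 channels
print('seg:  ', seg.shape)  # must have 1 channel; 3 here reproduces the broadcast error
```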
