cdn940912 opened this issue 8 months ago
This is my train.log:

```json
{ "batch_size": 2, "epoch": 800, "lr": 0.0001, "weight_decay": 0.0001 }
```
and my dataset.log:

```json
{
  "dataset_root": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/",
  "preprocessed": false,
  "denoised": false,
  "PBDA": false,
  "cropped": true,
  "crop_size": 576,
  "stride": 576,
  "black_ratio": 1,
  "denoising_size": 4096,
  "resolution": 0,
  "data": "se",
  "train_image_dir": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/Original/train",
  "train_mask_dir": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/labels/train",
  "val_image_dir": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/Original/val",
  "val_mask_dir": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/labels/val",
  "test_image_dir": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/Original/test",
  "test_mask_dir": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/labels/test",
  "train_image_dir_cropped": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/Original/train_crop576_s576",
  "train_mask_dir_cropped": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/labels/train_crop576_s576",
  "val_image_dir_cropped": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/Original/val_crop576_s576",
  "val_mask_dir_cropped": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/labels/val_crop576_s576",
  "test_image_dir_cropped": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/Original/test_crop576_s576",
  "test_mask_dir_cropped": "/home/chendanni/IDRiD-Eye-Fundus-Dataset-Lesion-Segmentation-main/Square_Format/labels/test_crop576_s576"
}
```
These are my results. Why are they so low?

```json
{
  "auc_pr": 0.40925517399049344,
  "metrics": {
    "iou": 0.2186624738086359,
    "accuracy": 0.3365170680010294,
    "recall": 0.6082205435713729,
    "fscore": 0.339803387957458,
    "precision": 0.23575919849551885
  }
}
```
Since the project is not finished yet, I have not shared the best parameters. However, for SE, because the dataset is small, you should set black_ratio to 0 (i.e., do not delete any black cropped images) and set stride to 288 to get even more cropped training data. Lastly, the PBDA method and the global-local training process should be used; a concrete example of the config changes is sketched below. All of these processes will be added. I have shared the beginning part of the script; the full script will be shared after the article is published.
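Concretely, applied to the dataset.log above, those suggestions would amount to changing these entries (assuming the PBDA method is toggled by the `PBDA` key shown in that config):

```json
{ "black_ratio": 0, "stride": 288, "PBDA": true }
```

Note that a stride of 288 with a crop_size of 576 makes adjacent crops overlap by half, which multiplies the number of training patches extracted from each image.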
Looking at the code, `dataset_conf['data']` corresponds to a single lesion's label mask. Can't all four categories be trained at once?
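One way this could work, purely as a hypothetical sketch (the lesion abbreviations and file layout below are assumptions, not the repository's actual code), is to stack each lesion's binary mask into a multi-channel target and train a four-channel model against it:

```python
# Hypothetical multi-label target construction for training all four
# lesion categories at once, instead of one lesion per run.
import numpy as np
from PIL import Image

# Assumed IDRiD lesion abbreviations: microaneurysms, haemorrhages,
# hard exudates, soft exudates (the config above selects "se").
LESIONS = ["ma", "he", "ex", "se"]

def load_multilabel_mask(mask_dirs: dict, image_id: str) -> np.ndarray:
    """Return a (4, H, W) float32 array with one channel per lesion type.

    `mask_dirs` maps each lesion abbreviation to its mask directory;
    the filename pattern is an assumption for illustration.
    """
    channels = []
    for lesion in LESIONS:
        path = f"{mask_dirs[lesion]}/{image_id}.png"
        mask = np.array(Image.open(path).convert("L"), dtype=np.float32)
        channels.append((mask > 0).astype(np.float32))
    return np.stack(channels, axis=0)

# A model with 4 output channels and a per-channel sigmoid, trained with a
# BCE or Dice loss, could then learn all four lesion types jointly.
```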