link:
https://pan.baidu.com/s/1nk0EmKEBVhtKAnctdo4veA
Extract code:
8nel
The Digital Retinal Images for Vessel Extraction (DRIVE) dataset is used for retinal vessel segmentation. It consists of 40 color fundus images in JPEG format, 7 of which show pathological abnormalities. The images were acquired as part of a diabetic retinopathy screening program in the Netherlands. Each image has a resolution of 584 × 565 pixels with 8 bits per color channel. The 40 images are divided equally into a training set of 20 images and a test set of 20 images. In both sets, each image comes with a circular field-of-view (FOV) mask about 540 pixels in diameter.
link:
https://pan.baidu.com/s/1TeXuIS21OfRLoh6DiwZuIg
Extract code:
o9sd
The STARE (Structured Analysis of the Retina) dataset is a public retinal vessel segmentation dataset created by Dr. Michael Goldbaum of the University of California, San Diego, USA. The dataset includes 20 fundus images with a resolution of 700 × 605 pixels. These images cover various lesions, such as macular degeneration, hypertensive retinopathy, and diabetic retinopathy. Each image comes with a hand-labeled vessel segmentation map that can be used to train and evaluate vessel segmentation algorithms.
We selected 10 images from the STARE dataset as the training set and another 10 as the test set, and converted them to grayscale. The selected images include cases of over-exposure, under-illumination, and low contrast in the optic disc region, which can be used to test the robustness of the model.
Place the dataset in the specified directory as required. For example, the DRIVE dataset needs to be placed in the following format:
./DRIVE
    /train
        /image
            /01_training.png
            /02_training.png
            /03_training.png
            ...
            /20_training.png
        /label
            /01_training.png
            /02_training.png
            /03_training.png
            ...
            /20_training.png
    /test
        /image
            /01_testing.png
            /02_testing.png
            /03_testing.png
            ...
            /20_testing.png
        /label
            /01_testing.png
            /02_testing.png
            /03_testing.png
            ...
            /20_testing.png
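If you want to confirm the layout before training, the following minimal sketch walks the tree above and reports missing files (purely illustrative; adjust the root path to your setup).

# Minimal layout check for the DRIVE directory tree shown above.
import os

root = "./DRIVE"
for split, suffix in [("train", "training"), ("test", "testing")]:
    for sub in ["image", "label"]:
        folder = os.path.join(root, split, sub)
        expected = [f"{i:02d}_{suffix}.png" for i in range(1, 21)]
        missing = [f for f in expected if not os.path.isfile(os.path.join(folder, f))]
        if missing:
            print(f"{folder}: missing {missing}")
        else:
            print(f"{folder}: OK (20 files)")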
The DRIVE and STARE datasets provided here have already been converted to grayscale. If your dataset contains RGB images, convert them to grayscale by executing the following command.
python RGB2gray.py
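For reference, the conversion performed by RGB2gray.py is conceptually equivalent to the following Pillow sketch (the folder path is illustrative, and the actual script may differ in details such as which folders it processes).

# Conceptual RGB-to-grayscale pass over one folder, overwriting files in place.
import glob
import os

from PIL import Image

for path in glob.glob("./DRIVE/train/image/*.png"):
    img = Image.open(path)
    if img.mode != "L":                 # skip files that are already grayscale
        img.convert("L").save(path)     # overwrite in place (the provided script may behave differently)
        print(f"converted {os.path.basename(path)}")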
Run the training code.
python train.py
After training, the weight file UNet.pth is generated in the project root directory.
We provide a weight file pretrained on the DRIVE dataset (trained with both cross-entropy loss and Dice loss).
link:
https://pan.baidu.com/s/1Zexs93LNct2n1lCjNIlUoQ
Extract code:
3c4x
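To run inference with the downloaded weights, load them into the model first. A minimal sketch, assuming the repo defines the model class as UNet in unet.py (both the module and class names here are assumptions; match them to your copy of the repo):

import torch

from unet import UNet  # assumed module/class name; match your repo

# Build the model and load the downloaded state dict.
model = UNet()
model.load_state_dict(torch.load("UNet.pth", map_location="cpu"))
model.eval()  # switch to inference mode before predicting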
Place an image in PNG format, named img.png, in the predict directory.
We have put a processed image in the predict folder; you can replace it with your own image (either RGB or grayscale).
./predict
    /img.png
Run the prediction code.
python predict.py
The output mask is then saved in the result directory as res.png.
./result
    /res.png
Place the dataset in the specified directory as required, following the same DRIVE layout shown above.
As above, if your dataset contains RGB images, convert them to grayscale by executing the following command.
python RGB2gray.py
python split_training_set.py
python split_testing_set.py
After running these scripts, a folder named DRIVE_patch is generated, and the cropped patches are numbered sequentially and saved in this folder. The scripts handle cropping for both the training set and the test set.
You can modify Line 18 of split_training_set.py and split_testing_set.py to change the patch size, or Line 19 of the same scripts to change the number of patches cropped from each image.
The smaller the patch size, the more patches should be cropped, otherwise the model risks underfitting; conversely, the larger the patch size, the fewer patches should be cropped, otherwise the model risks overfitting.
Then we can use the cropped patches for training and testing.
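For reference, the random cropping done by split_training_set.py is conceptually similar to the sketch below; patch_size and num_patches correspond to the values adjustable on Lines 18 and 19 (the sketch processes a single image pair and is not the actual script).

# Conceptual random patch cropping from one grayscale image and its label.
import os
import random

from PIL import Image

patch_size = 256   # side length of each square patch (cf. Line 18 of the script)
num_patches = 50   # patches cropped per image (cf. Line 19 of the script)

os.makedirs("./DRIVE_patch/train/image", exist_ok=True)
os.makedirs("./DRIVE_patch/train/label", exist_ok=True)

image = Image.open("./DRIVE/train/image/01_training.png").resize((512, 512))
# Use nearest-neighbor resampling for the label so mask values stay binary.
label = Image.open("./DRIVE/train/label/01_training.png").resize((512, 512), Image.NEAREST)

for i in range(num_patches):
    # Pick the same random top-left corner for image and label so they stay aligned.
    x = random.randint(0, image.width - patch_size)
    y = random.randint(0, image.height - patch_size)
    box = (x, y, x + patch_size, y + patch_size)
    image.crop(box).save(f"./DRIVE_patch/train/image/{i:04d}.png")
    label.crop(box).save(f"./DRIVE_patch/train/label/{i:04d}.png")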
Run the histogram equalization code.
python equalize_patch.py
After executing the above command, all the images in './DRIVE_patch/train/image' and './DRIVE_patch/test/image' are equalized. The resulting images overwrite the original files with the same names.
Histogram equalization enhances image contrast and improves the accuracy of retinal vessel segmentation.
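Conceptually, the per-image equalization performed by equalize_patch.py amounts to the following OpenCV sketch (paths are illustrative; the provided script may differ).

# Conceptual histogram equalization over the patch folders, overwriting in place.
import glob

import cv2

for folder in ["./DRIVE_patch/train/image", "./DRIVE_patch/test/image"]:
    for path in glob.glob(f"{folder}/*.png"):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)  # patches are grayscale
        if gray is not None:
            cv2.imwrite(path, cv2.equalizeHist(gray))  # spread intensities over the full range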
Run the patch-based training code.
python train_patch.py
After training, the weight file UNet_patch_equalized_patch_256_50.pth is generated in the project root directory.
Our hyperparameters were set as follows: the training-set and test-set images were resized to 512 × 512, the patch size was 256 × 256, and 50 patches were cropped from each image. This configuration performed well in our extensive comparative experiments.
We provide a weight file pretrained on the DRIVE dataset (trained with both cross-entropy loss and Dice loss).
link:
https://pan.baidu.com/s/1sSMdeVUdzkqxl0bAp84cwQ
Extract code:
70ru
As before, place a PNG image named img.png in the predict directory; you can replace the provided example with your own image (either RGB or grayscale).
./predict
    /img.png
Run the prediction code.
python predict_through_patch.py
The output mask is then saved in the result directory as res.png.
./result
    /res.png
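Internally, predict_through_patch.py must tile the input, predict each patch, and stitch the masks back together. The following simplified, non-overlapping sketch illustrates the idea; predict_tile is a placeholder, not the repo's model call.

# Simplified non-overlapping tile-predict-stitch loop.
import os

import numpy as np
from PIL import Image

tile = 256

def predict_tile(patch: np.ndarray) -> np.ndarray:
    # Placeholder: a real implementation would run the trained UNet here.
    return (patch > 128).astype(np.uint8) * 255

img = np.array(Image.open("./predict/img.png").convert("L").resize((512, 512)))
mask = np.zeros_like(img)

# 512 is divisible by 256, so the tiles cover the image exactly.
for y in range(0, img.shape[0], tile):
    for x in range(0, img.shape[1], tile):
        mask[y:y + tile, x:x + tile] = predict_tile(img[y:y + tile, x:x + tile])

os.makedirs("./result", exist_ok=True)
Image.fromarray(mask).save("./result/res.png")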
We provide links to prototypes of early system interfaces:
https://modao.cc/proto/6XGWX9WNs68d6e9PlYNRWw/sharing?view_mode=read_only
We also provide links to prototypes of optimized system interfaces:
https://modao.cc/proto/7HNGJzKTs6qg0d0DDMRYK1/sharing?view_mode=read_only
We also provide links to the latest versions of the prototypes:
https://modao.cc/proto/escyHojJs6oxenQSSRrRUY/sharing?view_mode=device&screen=skp0usdbTtC7bupkfF2YTc&canvasId=sskp0usdTtC7bvgnzWotNT
Run the application code.
python window_for_app.py
You need to install and configure the necessary libraries in advance, such as the Tkinter library.
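To confirm that Tkinter is configured correctly, you can run a minimal smoke test such as the following (illustrative only):

# Minimal Tkinter smoke test: a window with one label appears if Tkinter works.
import tkinter as tk

root = tk.Tk()
root.title("Tkinter check")
tk.Label(root, text="Tkinter is configured correctly.").pack(padx=20, pady=20)
root.mainloop()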
You can test the application's front-end and back-end functionality from cmd.exe, for example, whether the data-handling functions work properly, or whether the position and color of the front-end controls are correct.
pytest test_function.py
pytest test_position.py
pytest test_color.py
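The test files above ship with the repo. For reference, a front-end check in this style might look like the following sketch; the widget and expected color are assumptions, not the repo's actual tests.

# Illustrative pytest-style check of a control's color; not the repo's actual tests.
import tkinter as tk

def test_button_background_color():
    root = tk.Tk()
    try:
        button = tk.Button(root, text="Segment", bg="white")
        assert button.cget("bg") == "white"  # front-end color check
    finally:
        root.destroy()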