To download and use this dataset, you must agree to this Data Download Consent. By doing so, you accept all risks associated with obtaining the full dataset.
Please also note that:
=========================== conference: mmsp2022 ===============================
@inproceedings{9949547,
author = {Xue Xia and
Kun Zhan and
Ying Li and
Guobei Xiao and
Jinhua Yan and
Zhuxiang Huang and
Guofu Huang and
Yuming Fang},
title = {Eye Disease Diagnosis and Fundus Synthesis: {A} Large-Scale Dataset and Benchmark},
booktitle = {2022 IEEE 24th International Workshop on Multimedia Signal Processing (MMSP), Shanghai, China, September 26-28, 2022},
pages = {1--6},
publisher = {{IEEE}},
year = {2022},
doi = {10.1109/MMSP55362.2022.9949547},
}
========================== journal: spic open_access ================================
@article{XIA2024117151,
title = {Benchmarking deep models on retinal fundus disease diagnosis and a large-scale dataset},
journal = {Signal Processing: Image Communication},
volume = {127},
pages = {117151},
year = {2024},
issn = {0923-5965},
doi = {10.1016/j.image.2024.117151},
url = {https://www.sciencedirect.com/science/article/pii/S0923596524000523},
author = {Xue Xia and Ying Li and Guobei Xiao and Kun Zhan and Jinhua Yan and Chao Cai and Yuming Fang and Guofu Huang},
}
========================== Codes ================================
Because the experiments were run with various configurations, on different devices, and by different students, the code and weights provided on GitHub do not exactly replicate the experiments described in our paper. However, our model consistently maintains its top ranking in comparative evaluations. Moreover, the training log files are provided along with the .pth files on Mega Drive to confirm their integrity.
```
einops==0.7.0
distributed
torch==2.1.1+cu118
torchvision==0.16.1+cu118
simplejson==3.19.2
timm==0.4.12
iopath==0.1.10
scikit-learn==1.4.2
opencv-python==4.8.1.78
matplotlib==3.8.2
pandas==2.1.4
tqdm
```
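As a quick sanity check before training, the pinned versions above can be compared against the active environment. This is a minimal standard-library sketch (not part of the repo); the package names and versions are copied from the list above.

```python
from importlib import metadata

# Pinned requirements from the list above (pip-installable subset;
# versions are those stated in this README, not authoritative).
PINNED = {
    "einops": "0.7.0",
    "simplejson": "3.19.2",
    "timm": "0.4.12",
    "scikit-learn": "1.4.2",
    "pandas": "2.1.4",
}

def check_pins(pins):
    """Return {name: (wanted_version, found_version_or_None)}."""
    report = {}
    for name, wanted in pins.items():
        try:
            found = metadata.version(name)
        except metadata.PackageNotFoundError:
            found = None  # package not installed at all
        report[name] = (wanted, found)
    return report

if __name__ == "__main__":
    for name, (wanted, found) in check_pins(PINNED).items():
        status = "OK" if found == wanted else f"got {found}"
        print(f"{name}=={wanted}: {status}")
```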
The code can also be run on macOS with MPS using Python 3.9.
Pretrained weights for the comparison baselines can be downloaded from the following links: resnet18, densenet, efficientnet_v2_Mega, or efficientnet_v2_Google. Place the downloaded weights in the `./pre-trained/put_your_weights_here` directory.
The data loaders for the different datasets and tasks are located in `datasets/All_Datasets.py`, which includes the annotation-loading code. You can modify the base paths in `config/_data/datasetConf.py`.
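The actual annotation format is defined by the CSV files under `datas/EDDFS/Annotation/` and parsed in `datasets/All_Datasets.py`. Purely as an illustration of the idea, a loader for a hypothetical multi-label CSV layout (the column names below are invented, not the real EDDFS schema) could look like:

```python
import csv
import io

# Hypothetical annotation layout: an image filename column followed by
# one 0/1 column per disease label. The real columns live in
# datas/EDDFS/Annotation/ and may differ.
SAMPLE = """filename,dr,amd,glaucoma
img_0001.jpg,1,0,0
img_0002.jpg,0,1,1
"""

def load_annotations(fp):
    """Return a list of (filename, label_vector) pairs from a CSV stream."""
    reader = csv.DictReader(fp)
    label_cols = [c for c in reader.fieldnames if c != "filename"]
    return [(row["filename"], [int(row[c]) for c in label_cols])
            for row in reader]

pairs = load_annotations(io.StringIO(SAMPLE))
print(pairs[0])  # ('img_0001.jpg', [1, 0, 0])
```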
The well-trained weights of our model are available for various tasks:

| task | weights |
|---|---|
| multi-label multi-disease | GoogleDrive, Mega |
| single-label multi-disease | GoogleDrive, Mega |
| DR grading | GoogleDrive, Mega |
| AMD grading | GoogleDrive, Mega |
| Laser | GoogleDrive, Mega |
| RVO | GoogleDrive, Mega |
| Pathological Myopia | GoogleDrive, Mega |
<!-- | Hypertension Retinopathy | [GoogleDrive](), [Mega]() | -->
To train the model, run the following command from your terminal:

```bash
bash train-x.sh
```

or use the Python script:

```bash
python main.py \
    --useGPU 0 \
    --dataset EDDFS_dr \
    --preprocess 7 \
    --imagesize 448 \
    --net coattnet_v2_withWeighted_tiny \
    --epochs 51 \
    --batchsize 32 \
    --lr 0.00009 \
    --numworkers 4 \
    --pretrained False \
    --lossfun focalloss
```
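The flags above are parsed inside `main.py`. The repo's real parser may define more options and different defaults, but a minimal sketch of how these particular flags could be declared is shown below; note that `--pretrained False` arrives as a string, so a naive `type=bool` would silently read it as `True`.

```python
import argparse

def str2bool(s):
    """argparse's type=bool treats any non-empty string (even "False")
    as True, so word-style boolean flags need explicit parsing."""
    return s.lower() in ("true", "1", "yes")

def build_parser():
    # Options mirror the README commands; assumption: the repo's main.py
    # may declare additional flags or different defaults.
    p = argparse.ArgumentParser(description="EDDFS training (sketch)")
    p.add_argument("--useGPU", type=int, default=0)
    p.add_argument("--dataset", default="EDDFS_dr")
    p.add_argument("--preprocess", type=int, default=7)
    p.add_argument("--imagesize", type=int, default=448)
    p.add_argument("--net", default="coattnet_v2_withWeighted_tiny")
    p.add_argument("--epochs", type=int, default=51)
    p.add_argument("--batchsize", type=int, default=32)
    p.add_argument("--lr", type=float, default=9e-5)
    p.add_argument("--numworkers", type=int, default=4)
    p.add_argument("--pretrained", type=str2bool, default=False)
    p.add_argument("--lossfun", default="focalloss")
    return p

args = build_parser().parse_args(["--pretrained", "False", "--lr", "0.00009"])
print(args.pretrained, args.lr)  # False 9e-05
```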
To evaluate a trained model:

```bash
python test.py \
    --useGPU 0 \
    --dataset EDDFS_dr \
    --preprocess 7 \
    --imagesize 448 \
    --net coattnet_v2_withWeighted_tiny \
    --numworkers 4 \
    --weight your_model_file
```
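The `--lossfun focalloss` option in the training command selects focal loss. As a reference point only, the standard binary formulation from Lin et al. (2017) is sketched below; the repo's actual implementation may use different alpha/gamma values or a multi-label variant.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single predicted probability p in (0, 1)
    and ground-truth label y in {0, 1}. Easy, confident examples are
    down-weighted by (1 - pt)**gamma, which suits imbalanced labels."""
    pt = p if y == 1 else 1.0 - p          # probability of the true class
    a = alpha if y == 1 else 1.0 - alpha   # class-balance weight
    return -a * (1.0 - pt) ** gamma * math.log(pt)

# A confident correct prediction contributes far less than an uncertain one.
print(focal_loss(0.9, 1) < focal_loss(0.5, 1))  # True
```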
========================== Directory Structure ================================
```
project-root/
├── README.md
├── train-x.sh
├── test.py
├── main.py
├── mask.png
├── config/
│   ├── _data/
│   │   └── datasetConf.py
├── datas/
│   ├── EDDFS/
│   │   └── Annotation/
│   │       └── .csv
│   ├── other_datasets/
├── datasets/
│   └── All_Datasets.py
├── models/
│   └── ...
├── pre-trained/
├── results/
│   └── evaluation_during_training.csv
├── runs/
│   └── training_tensorboard_summary.local
├── tools/
├── weights/
│   └── well-trained.pth
```