coreeey / OCT2Former

The official code of OCT2Former for Retinal OCT-Angiography vessel segmentation

Clarification on the --data_root_aux Argument in OCT2Former Code #2

Closed · mahlaaa2 closed this 1 week ago

mahlaaa2 commented 2 weeks ago

Hello,

First of all, I want to express my gratitude for making the code for the paper "OCT2Former: A retinal OCT-angiography vessel segmentation transformer" publicly available. It is a significant contribution to the research community.

I am currently working with the OCT2Former code and have a question about the --data_root_aux parameter in the settings_args.py file. The argument is defined as follows:

parser.add_argument("--data_root_aux", type=str, default="")

I am unclear on what the purpose of --data_root_aux is and what data should be provided for this parameter. The paper mentions the use of datasets such as OCTA-SS, ROSE-1, and OCTA-500 (subsets OCTA-6M and OCTA-3M), but I’m not sure how --data_root_aux fits within this context.

Could you please clarify what --data_root_aux refers to and how it should be configured?

Thank you for your time. Your help will go a long way toward successfully running the code and replicating the experiments.

Best regards,

Mahla

coreeey commented 2 weeks ago

The 'data_root_aux' argument is the root directory of the OCT images that correspond to each OCTA image in the OCTA-500 dataset; it is used only for that dataset.
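For illustration, here is a minimal sketch of how the two roots might be consumed when loading one OCTA-500 sample. The function name, file layout, and pairing-by-filename convention are assumptions made for this example, not the repository's exact code:

import os

import numpy as np
from PIL import Image


def load_octa500_pair(data_root, data_root_aux, name):
    # data_root:     directory of the OCTA projection images (primary input)
    # data_root_aux: directory of the matching OCT projection images
    # Assumption: both roots use identical file names for paired images.
    octa = np.asarray(Image.open(os.path.join(data_root, name)), dtype=np.float32)
    oct_img = np.asarray(Image.open(os.path.join(data_root_aux, name)), dtype=np.float32)
    assert octa.shape == oct_img.shape, "paired OCTA/OCT images must match in size"
    return octa, oct_img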

mahlaaa2 commented 2 weeks ago

Thank you for your previous response regarding the use of the data_root_aux parameter. Your clarification was greatly appreciated.

Would it be possible to share the trained model weights, or the train.py file adapted for the other datasets, such as OCTA-SS and ROSE-1? This would be incredibly helpful for replicating the results across different datasets.

Additionally, I wanted to confirm whether the proposed model in your paper used information from both OCT and OCTA for training. Specifically, did the results reported for OCTA-3M and OCTA-6M utilize data from both modalities?

Thank you once again for your time and contributions to this important research.

Best regards,

Mahla

coreeey commented 2 weeks ago

We cannot share the model weights. However, you can train the model on the other datasets using the corresponding shell scripts, such as train3M.sh and train6M.sh. Additionally, both OCT and OCTA images are used for the OCTA-3M and OCTA-6M datasets; in our experiments, we simply concatenate the two modalities as two input channels.
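For anyone replicating this, a minimal sketch of the two-channel concatenation described above (array sizes and variable names are illustrative, not the repository's exact code):

import numpy as np

# Placeholder single-channel en-face projections of identical size
octa = np.random.rand(304, 304).astype(np.float32)     # OCTA image
oct_img = np.random.rand(304, 304).astype(np.float32)  # paired OCT image

# Stack the two modalities along a new channel axis -> (2, H, W) network input
x = np.stack([octa, oct_img], axis=0)
print(x.shape)  # (2, 304, 304)

The same stacked array can then be fed to the model wherever a single-modality input would otherwise go, with the network's first layer configured for two input channels.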