AhsanAhmad3 opened 1 month ago
Hi Ahsan, if you only want to use this model for inference, I recommend using this HuggingFace Space directly, which is already configured for that purpose. You can also download its code (app.py), which contains a segment function that takes an input image and returns the segmentation.
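As a minimal sketch of how that could look (the `segment` function name comes from the Space's app.py mentioned above, but the import path, signature, and return type are assumptions to verify against the downloaded file):

```python
# Minimal usage sketch, assuming app.py from the HuggingFace Space is in the
# working directory and exposes a `segment` function as described above.
from PIL import Image

from app import segment  # hypothetical import; adjust to the downloaded app.py

# Load a frontal chest X-ray (any free-to-use sample image will do)
image = Image.open("chest_xray.png").convert("L")

# Run segmentation; check app.py for the exact output format
mask = segment(image)
mask.save("segmentation.png")  # assumes a PIL-compatible output; verify in app.py
```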
The source code in this repo is organized with the training files in the main folder and the inference and results in Python notebooks under the Results/ folder. It was primarily meant to be the source code for the paper.
Regarding the data to use, any frontal chest X-ray image can be segmented as long as it is fed to the model with a 1024x1024 shape. In the HuggingFace Space I'm using free-to-use images, as I cannot share data. In the original paper I used the Shenzhen, Montgomery, and PadChest datasets. In more recent work I also segmented MIMIC-CXR, ChestX-ray8, VinDr-CXR, and CheXpert images.
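If you are preparing your own images, a possible preprocessing sketch is below. The 1024x1024 requirement is from the comment above; the grayscale conversion, interpolation choice, and intensity scaling are assumptions you should check against the repo's notebooks or app.py:

```python
from PIL import Image
import numpy as np

def preprocess_cxr(path):
    """Load a frontal chest X-ray and resize it to the 1024x1024 input the model expects."""
    img = Image.open(path).convert("L")              # single-channel grayscale (assumption)
    img = img.resize((1024, 1024), Image.BILINEAR)   # interpolation choice is an assumption
    arr = np.asarray(img, dtype=np.float32) / 255.0  # scale to [0, 1]; verify against app.py
    return arr

x = preprocess_cxr("sample_cxr.png")
print(x.shape)  # (1024, 1024)
```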
I hope this helps.
Can you please help, as I am unable to understand how to run the code? I would be grateful if you could help in this regard. I want to present this paper to my fellow students for the work that you have put in, and for that I need to run the code, but I am unable to figure out which files to run first and which dataset to use, as the JSRT data is not available at the moment.