GuobinZhangTJU opened 4 years ago
I sent an email to the organizer to download the competition data, but there was no reply. Could you share the dataset used in this work? Your help will be greatly appreciated. Many thanks!
There are no steps to train the model, and no .h5 file has been uploaded to any folder, so it is not possible to test.
Hi @GuobinZhangTJU, so sorry for such a belated reply. It was a competition and the datasets were under license terms that make us unable to share them with anyone without the organizer's consent. So sorry about that :( However, I believe they have recently published a paper based on their dataset: https://arxiv.org/pdf/2201.00458.pdf so they might respond now!
Hi @ldelaoa, sorry for that. I was in a hurry when I uploaded the files and forgot to share the training instructions. Here they are: first run the generate_data_from_dicom.py script to prepare the raw data, then simply run the model_train_final.py script to train the model. The first script puts the data in the correct format, so you don't need to worry about anything else. However, this code will work with this competition's data only; making it work with another dataset may require some changes.
I am closing this issue now; feel free to reopen it if you have any further questions.
Hello @udaykamal20, thanks for the quick response. I believe that in order to run the code successfully I need the Recurrent_3D_DenseUnet_model.h5 file. Do you happen to have it?
Hello again @ldelaoa, actually that is the trained weight file for the model. You will need it only if you want to reproduce our submitted score. Unfortunately, I don't have access to that file anymore :( However, you can always train your own model following the instructions in my previous reply, save the weights, and test the network on your own dataset. Hope that helps.
Thanks again for the quick answer. I just ran generate_data_from_dicom.py, but there is no main function, only definitions of functions that are later called by model_train_final.py, and I can't seem to understand the logic for creating the dataset from DICOM to PNG. Since the .h5 file is no longer available, could you help me out with some more instructions on how to process the dataset?
Hey @ldelaoa, yes, sure I can. I am sorry, I just realized the main function seems to be missing from generate_data_from_dicom.py. All you need to do is call this function: https://github.com/udaykamal20/Team_Spectrum/blob/d0f52e7d0f9353e6b44c4d897c59ee7136da395f/src/generate_data_from_dicom.py#L317 The input arguments are: path (str), the directory that contains the DICOM files, e.g. the folder of a single patient; writepath, the directory where you want to save the images and masks; and the image format, png by default. This should help you process the raw DICOM files. Feel free to let me know if you require further assistance.
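Putting those arguments together, a driver loop over a whole dataset might look like the sketch below. `dicom_to_images` is a hypothetical stand-in name for the linked function (the real name is at L317 of generate_data_from_dicom.py), and the one-folder-per-patient layout is an assumption:

```python
import os

def dicom_to_images(path, writepath, img_format="png"):
    """Hypothetical stand-in for the function linked above: the real one
    reads the DICOM series in `path` and writes the images and masks
    into `writepath` in the given format."""
    os.makedirs(writepath, exist_ok=True)
    return f"{path} -> {writepath} (*.{img_format})"

# Assumed layout: one sub-directory per patient under raw_root
raw_root, out_root = "raw_dicom", "processed"
if os.path.isdir(raw_root):
    for patient in sorted(os.listdir(raw_root)):
        patient_dir = os.path.join(raw_root, patient)
        if os.path.isdir(patient_dir):
            dicom_to_images(patient_dir, out_root)
```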
Hello @udaykamal20, thanks for the advice and for reopening the issue.
I ran the function with the proper path. After fixing a few typos and adding extra validation, such as checking Modality == 'CT' before reading the pixel array from the images, I was able to save the PNG images. Yet I noticed one important thing here:
In the file model_training_final.py the code expects root/train/images/LUNG/*.png, yet after running generate_data_from_dicom.py my folders are /root/images/*.png.
Any ideas of what could be happening?
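(For reference, the Modality guard mentioned above can be sketched as follows; `FakeDataset` is just a stand-in for a pydicom dataset so the snippet is self-contained.)

```python
# Sketch of the Modality == 'CT' check described above. FakeDataset is a
# minimal stand-in for a pydicom Dataset; in the real script the object
# would come from reading a DICOM file.
class FakeDataset:
    def __init__(self, modality):
        self.Modality = modality

def is_ct(ds):
    # Only CT series carry the pixel data we want; skip e.g. RTSTRUCT files
    return getattr(ds, "Modality", None) == "CT"

print(is_ct(FakeDataset("CT")))        # True
print(is_ct(FakeDataset("RTSTRUCT")))  # False
```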
Hello, nice work, thank you for sharing it with us! I have a question, can you help me? Does the code work now? Should I run generate_data_from_dicom.py first? I didn't find a main function. Can you tell me your working steps? Thank you very much. Best wishes, waiting for your reply.