abhishekdiphu / Automatic-keypoint-localization-in-2dmedical-images

Using Yu Chen et al.'s AdversarialPoseNet for landmark localization in 2D medical images (lower extremities)

Cannot run the codes #1

Open A465539338 opened 3 years ago

A465539338 commented 3 years ago

Hi abhishekdiphu, I followed the instructions and created the environment successfully, but I cannot run the code because there are still errors in this version.
Actually, there are many differences compared to https://github.com/rohitrango/Adversarial-Pose-Estimation. Could you please explain the purpose of the modifications you made when applying the code to medical images? By the way, I ran the code provided by rohitrango, but the training process seems to have trouble converging on the LSPet dataset, and the predicted images are far from correct pose heatmaps. Do you have any suggestions for running your code or the code from rohitrango?
I would much appreciate it if you could help.

abhishekdiphu commented 3 years ago

Regarding the medical images, I used an internal FH Kiel package (not publicly available) for loading the medical image data. I also used a different stacked hourglass, and there are lots of other changes: the evaluation metric, some preprocessing, and, most importantly, the cost function for training the adversarial model.
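For readers unfamiliar with what "the cost function for training the adversarial model" typically looks like: in Adversarial PoseNet-style training the generator is optimized with a supervised heatmap loss plus an adversarial term from the discriminator. The sketch below is illustrative only — the function and variable names are mine, not the repo's, and the `alpha` weighting is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

def generator_loss(pred_heatmaps, gt_heatmaps, disc_scores_on_fake, alpha=0.01):
    """Illustrative generator objective: supervised MSE + adversarial term.

    Names and weighting (alpha) are assumptions, not taken from the repo.
    """
    # Supervised term: regress predicted heatmaps onto ground-truth heatmaps.
    mse = F.mse_loss(pred_heatmaps, gt_heatmaps)
    # Adversarial term: encourage the discriminator to score the
    # generated heatmaps as "real" (label 1).
    adv = F.binary_cross_entropy_with_logits(
        disc_scores_on_fake, torch.ones_like(disc_scores_on_fake))
    return mse + alpha * adv

# Toy shapes: batch of 2 images, 16 joint heatmaps of size 64x64.
pred = torch.rand(2, 16, 64, 64)
gt = torch.rand(2, 16, 64, 64)
scores = torch.randn(2, 1)  # discriminator logits on the predicted heatmaps
loss = generator_loss(pred, gt, scores)
```

The point of the extra term is that the discriminator penalizes anatomically implausible heatmap configurations that a plain MSE loss would not catch.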

However, if you just want to try the LSP and MPII datasets, then try one of these repositories: https://github.com/YUNSUCHO/Localizing-human-pose-in-adversarial-way-with-MPII-dataset, https://github.com/YUNSUCHO/Adversarial-Pose-Estimation-with-LSP-dataset, or https://github.com/roytseng-tw/adversarial-pose-pytorch

Actually, I have stopped maintaining this repository.

A465539338 commented 3 years ago

Thanks for the reply. The three projects you linked provide more information about Adversarial PoseNet. All of them load a pre-trained generator. How do you get the pre-trained model?

A465539338 commented 3 years ago
#modelpath = torch.load('train-model-19/supervised-medical-660-lr-0001/new_exp54_batchsize1/model_50.pt')
#generator_model = modelpath['generator_model']
#discriminator_model = modelpath['discriminator_model']
#print(generator_model)
#print(discriminator_model)

#----------------------------------------------------------------------------------#
#------------------------fine-tuning the model-------------------------------------#
# load the pre-trained model
#modelpath = torch.load('train_model04/model_228_5000.pt')
#generator_model = modelpath['generator_model']
#discriminator_model = modelpath['discriminator_model']
#optim_gen = modelpath['optim_gen']
#optim_disc = modelpath['optim_disc']
#print("pretrained model loaded")
#--------------------------------------
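For context, a checkpoint like `model_50.pt` in the snippet above is presumably produced during training by `torch.save` on a dict holding the generator, discriminator, and their optimizers. Below is a hedged sketch of that round trip — the tiny `nn.Linear` modules are stand-ins for the real networks, and an in-memory buffer replaces the file path; only the dict keys mirror the snippet.

```python
import io
import torch
import torch.nn as nn

# Stand-ins for the real generator/discriminator (assumptions, not the repo's models).
generator_model = nn.Linear(4, 2)
discriminator_model = nn.Linear(2, 1)
optim_gen = torch.optim.SGD(generator_model.parameters(), lr=1e-4)
optim_disc = torch.optim.SGD(discriminator_model.parameters(), lr=1e-4)

# Save everything needed to resume training, keyed as in the snippet above.
buffer = io.BytesIO()  # in place of a path like 'train_model04/model_228_5000.pt'
torch.save({
    'generator_model': generator_model,
    'discriminator_model': discriminator_model,
    'optim_gen': optim_gen,
    'optim_disc': optim_disc,
}, buffer)

# Reload for fine-tuning. weights_only=False is needed on recent PyTorch
# because whole modules (not just state_dicts) are pickled here.
buffer.seek(0)
checkpoint = torch.load(buffer, weights_only=False)
restored_gen = checkpoint['generator_model']
```

Saving whole module objects (as this repo apparently does) is convenient but fragile across code changes; saving `state_dict()`s is the more portable pattern.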
abhishekdiphu commented 3 years ago

These pre-trained models cannot be made publicly available, since the medical images I used for training belong to the university.

Regarding the other repos, I have no idea whether a pre-trained model is available or not.

A465539338 commented 3 years ago

I understand. Could you tell me how I can get a pre-trained model after training the model? From my point of view, if I only take the loss from the generator and optimize it with SGD, that is the pre-training process for the whole model. But the model does not seem to predict correct heatmaps if I start by training the generator only.
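The pre-training described here is the standard recipe: before any adversarial term is added, the generator alone is trained with a supervised heatmap loss. A minimal sketch, assuming a toy single-conv "generator" in place of the real stacked hourglass and random tensors in place of real images and heatmaps:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for the stacked-hourglass generator: image -> 16 joint heatmaps.
generator = nn.Conv2d(3, 16, kernel_size=3, padding=1)
optimizer = torch.optim.SGD(generator.parameters(), lr=0.1)
criterion = nn.MSELoss()

# Placeholder data: batch of 4 RGB images and their ground-truth heatmaps.
images = torch.rand(4, 3, 64, 64)
gt_heatmaps = torch.rand(4, 16, 64, 64)

losses = []
for _ in range(20):
    optimizer.zero_grad()
    loss = criterion(generator(images), gt_heatmaps)  # supervised heatmap MSE only
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

Once this supervised stage converges, the checkpoint becomes the "pre-trained generator", and the discriminator plus adversarial loss are added on top. If generator-only training never yields plausible heatmaps, the usual suspects are target heatmap generation (Gaussian size, normalization) or the learning rate, rather than the missing adversarial term.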