david8862 / tf-keras-stacked-hourglass-keypoint-detection

end-to-end Stacked Hourglass Networks pipeline for single-object keypoint estimation, implemented on tf.keras

multistage loss #6

Open soans1994 opened 2 years ago

soans1994 commented 2 years ago

hello author,

I have some queries regarding your work. I recently worked with a multistage model: for example, I used 5 stages, 5 outputs, and 5 individual losses, so I had to generate 5 copies of the same label to compute the losses. (I think that even if I don't return multiple labels, the loss is still computed from the single label returned by the dataloader.)

For stacked hourglass, should I use a similar method? Edit: I'm training your hourglass model with MSE loss. How many stages are better? Also, how should I choose between 128 and 256 features? What's the difference, just a larger number of filters?

thank you

david8862 commented 2 years ago

@soans1994

  1. Yes, and the current data generator already provides the multiple labels as a list for training (see the sketch below).
  2. The original paper uses stack=8 and features=256, but generally stack=2 is enough to get a reasonable result.
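A minimal sketch of what such a generator could look like, assuming a hypothetical `load_sample()` helper and illustrative batch/heatmap sizes (not the repo's actual code):

```python
import numpy as np

# Sketch of a data generator for a multi-output stacked hourglass model.
# `load_sample()`, the batch/heatmap sizes and the `num_stacks` value are
# illustrative assumptions, not the repo's actual implementation.
def hourglass_data_generator(sample_list, batch_size=8, num_stacks=2,
                             num_keypoints=16, heatmap_size=(64, 64)):
    while True:
        images, heatmaps = [], []
        for _ in range(batch_size):
            # load_sample() is assumed to return one preprocessed input image
            # and its ground-truth heatmaps (one channel per keypoint).
            image, heatmap = load_sample(sample_list, heatmap_size, num_keypoints)
            images.append(image)
            heatmaps.append(heatmap)
        x = np.asarray(images, dtype='float32')
        y = np.asarray(heatmaps, dtype='float32')
        # The model has one output per stack, so the same ground-truth
        # heatmap batch is repeated in a list, one copy per stage.
        yield x, [y] * num_stacks
```

Compiling the model with a single `'mse'` loss then applies that loss to every output, and Keras sums the per-stage losses into the total training loss, which is the usual intermediate-supervision setup for stacked hourglass.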
soans1994 commented 2 years ago

thank you,

  1. Is it okay if I don't return a list, but a single label?
  2. Okay, I will try these values.

I have upsampled the stage output with a 4x4 kernel to get a 96x96 output, since my input image is only 96x96, which is quite small. Is it okay if I upscale the stage outputs to visualize them during inference?
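As an illustration, upscaling a predicted stage heatmap back to the input resolution for display could look roughly like this (the 96x96 target size matches the setup above; the stage output resolution and helper names are only assumptions):

```python
import numpy as np
import tensorflow as tf

# Illustrative only: upscale one stage's predicted heatmaps back to the
# 96x96 input resolution so they can be overlaid on the image.
def upscale_heatmap_for_display(stage_heatmap, target_size=(96, 96)):
    # stage_heatmap: (height, width, num_keypoints) output of one stage
    resized = tf.image.resize(stage_heatmap, target_size, method='bilinear')
    return resized.numpy()

def heatmap_to_keypoints(heatmap):
    # Take the peak of each channel as that keypoint's (x, y) location.
    keypoints = []
    for k in range(heatmap.shape[-1]):
        y, x = np.unravel_index(np.argmax(heatmap[..., k]), heatmap.shape[:2])
        keypoints.append((x, y))
    return keypoints
```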

david8862 commented 2 years ago

> thank you,
>
>   1. Is it okay if I don't return a list, but a single label?

Should be no, since Keras doesn't automatically duplicate a single label across the multiple outputs when computing the loss.
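A minimal sketch of that behavior with a toy two-output model (this stand-in network is only illustrative, not the actual hourglass architecture):

```python
import numpy as np
from tensorflow.keras import layers, Model

# Toy two-output model standing in for a 2-stack hourglass.
inputs = layers.Input(shape=(64, 64, 3))
x = layers.Conv2D(16, 3, padding='same', activation='relu')(inputs)
out1 = layers.Conv2D(16, 1, name='stage1_out')(x)
out2 = layers.Conv2D(16, 1, name='stage2_out')(x)
model = Model(inputs, [out1, out2])
model.compile(optimizer='adam', loss='mse')  # same MSE loss on every output

x_batch = np.zeros((2, 64, 64, 3), dtype='float32')
y_batch = np.zeros((2, 64, 64, 16), dtype='float32')

# model.fit(x_batch, y_batch) with a single label typically fails with a
# target/structure mismatch error -- Keras does not silently copy it to
# both outputs. The label has to be supplied once per output instead:
model.fit(x_batch, [y_batch, y_batch], epochs=1, verbose=0)
```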

>   2. Okay, I will try these values.
>
> I have upsampled the stage output with a 4x4 kernel to get a 96x96 output, since my input image is only 96x96, which is quite small. Is it okay if I upscale the stage outputs to visualize them during inference?

I haven't tried that, so I'm not sure about the model performance.