leeyeehoo / CSRNet-pytorch

CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes
642 stars 259 forks

Get point coordinates from the result density map #17

Open MSoulPlayer opened 5 years ago

MSoulPlayer commented 5 years ago

Thank you for releasing the code. The net uses the density map as the ground truth. I wonder, is it possible to generate the point coordinates from the density map?

BedirYilmaz commented 5 years ago

Hi, just out of curiosity, why don't you use the original ground-truth annotation files (.mat files) if you need the point coordinates? I am afraid that de-blurring the density map could be a lot more difficult than just reading the annotations.

I think what you are asking has something to do with image de-blurring, Gaussian deconvolution, or image sharpening.
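Reading the annotations directly, as suggested above, might look like the following. This is a hedged sketch, not code from this repo: real ShanghaiTech .mat files nest the (x, y) points deeper (under `mat["image_info"]`), while this round-trip uses a flat, hypothetical `"points"` key just to show the `loadmat` call.

```python
import os
import tempfile

import numpy as np
from scipy.io import loadmat, savemat

# Hypothetical flat layout: a (N, 2) array of (x, y) head coordinates.
# Adjust the key and nesting for your actual annotation files.
points = np.array([[12.5, 30.1], [44.0, 8.2]])

path = os.path.join(tempfile.mkdtemp(), "annotations.mat")
savemat(path, {"points": points})          # write a toy annotation file

loaded = loadmat(path)["points"]           # read the coordinates back
print(loaded.shape)                        # one (x, y) row per head
```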

MSoulPlayer commented 5 years ago

@BedirYilmaz Yeah, I'm focusing on the (annotated) head locations of the crowd; the most naive idea is getting the point coordinates from the predicted density map.
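For the naive idea above, one common heuristic (not part of this repo) is to treat local maxima of the density map as candidate head locations. This is a rough sketch; the smoothing sigma, neighborhood size, and threshold are assumptions that would need tuning per dataset.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter


def density_map_peaks(density, neighborhood=7, threshold=0.02):
    """Return (row, col) coordinates of local maxima above a threshold."""
    smoothed = gaussian_filter(density, sigma=1)  # suppress small noise
    # A pixel is a local maximum if it equals the max of its neighborhood.
    local_max = maximum_filter(smoothed, size=neighborhood) == smoothed
    # Threshold out flat near-zero regions, which also count as "maxima".
    peaks = local_max & (smoothed > threshold)
    return np.argwhere(peaks)


# Toy example: two Gaussian blobs, as in a ground-truth density map.
dm = np.zeros((40, 40))
dm[10, 10] = dm[30, 25] = 1.0
dm = gaussian_filter(dm, sigma=2)

print(density_map_peaks(dm))  # candidate head coordinates
```

On a predicted (rather than ground-truth) map, the noise makes both the threshold and the neighborhood size much more fragile, which is the difficulty discussed above.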

BedirYilmaz commented 5 years ago

I doubt the accuracy you could achieve by retrieving the head locations would be sufficient compared to what you get from the annotations, even when working on a ground-truth density map.

It would be even harder with an estimated density map, since it is a lot noisier.

I don't think I can help you further on this one, but the following might give you an idea: http://crcv.ucf.edu/projects/crowd/