jia-wan / Fine-Grained-Counting


Some issues with the code implementation #1

Open lmy98129 opened 3 years ago

lmy98129 commented 3 years ago
  1. The ground-truth hdf5 files are unavailable, which causes errors. At line 57 of “/datasets/fine_grained_dataset.py”, the code tries to load ground-truth hdf5 files (including “fix4”, “fix16”, “adapt”, “dot”, “mask.h5”, and “_seg.h5”) that do not exist in the dataset you uploaded to Google Drive; only “annotation.json” is provided. If you can share the hdf5 files and the JSON-to-hdf5 conversion code, we can reproduce the baseline results without these “File Not Found” errors (a sketch of the conversion we have in mind follows this list).
  2. Only the Stacked Hourglass is implemented. “models/networks.py” contains only the Stacked Hourglass used for the proposed “Density-aware Feature Propagation”. It is of course the best-performing approach in your experiments, but we are still curious about the implementation details of the graph-based methods used in the paper, namely the GCN and the CRF. Having their code available would help us better understand how these methods perform on the fine-grained counting task.
  3. The environment settings needed to run the code are not described. The specific versions of the required packages, e.g., CUDA, PyTorch, OpenCV, and Pillow (PIL), are unknown, and inconsistent environments may change the reproduced results. It would also be helpful if the environment settings were specified.
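For reference, below is a minimal sketch of the kind of JSON-to-hdf5 conversion we have in mind. The annotation schema (per-image “points”, “height”, and “width” keys) and the output dataset name are our assumptions, since the actual format of “annotation.json” is not documented:

```python
# Hypothetical JSON-to-HDF5 conversion sketch. The annotation schema
# (per-image [x, y] head points plus image size) is assumed, not taken
# from the repository.
import json

import h5py
import numpy as np

with open('annotation.json') as f:
    annotations = json.load(f)

for image_name, ann in annotations.items():
    h, w = ann['height'], ann['width']          # assumed keys
    dot_map = np.zeros((h, w), dtype=np.float32)
    for x, y in ann['points']:                  # assumed key
        dot_map[min(int(y), h - 1), min(int(x), w - 1)] += 1.0
    out_name = image_name.rsplit('.', 1)[0] + '_dot.h5'
    with h5py.File(out_name, 'w') as h5:
        h5.create_dataset('density', data=dot_map)  # assumed dataset name
```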

Many thanks for your outstanding contributions of a new task, dataset, and baseline to the research community! For further research on this novel and interesting task, your feedback and any other suggestions are greatly appreciated. @jia-wan

jia-wan commented 3 years ago
  1. The hdf5 gt files are large, so they are usually generated locally. Please refer to the code for more details.
  2. I will commit the other structures later.
  3. Please find the environment info in the attached file. Thanks for your interest. requirements.txt
lmy98129 commented 3 years ago
> 1. The hdf5 gt files are large, so they are usually generated locally. Please refer to the code for more details.
> 2. I will commit the other structures later.
> 3. Please find the environment info in the attached file. Thanks for your interest. requirements.txt

Thank you for your detailed replies. The information above has helped us greatly, and we look forward to the future release of the code for the other structures. However, there are still a few problems to report:

  1. Generating the different types of ground-truth hdf5 files, including "fix4", "fix16", "adapt", and "dot". We have checked the MATLAB code at the link you provided, but it can generate only one type of file rather than four. We are also curious about what these four types mean.
  2. A cloud drive is an alternative way to distribute the ground-truth files. Since the dataset is already on Google Drive for open access, the ground-truth files could be shared the same way if they are too large to attach directly to this GitHub repository.
  3. The environment requirements need a little more detail. We are glad to receive the requirements.txt and have followed it to set up the environment, but the versions of components beyond pip are still unclear, including Python (2.x or 3.x? 3.5 or 3.8?), CUDA (9.x, 10.x, or 11.x?), cuDNN, and the NVIDIA driver. They may not be crucial for reproducing the results, but knowing them would help us avoid version incompatibilities (a quick version-check snippet follows below).
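In case it helps, this is how we read off the non-pip versions on our side; these are standard Python/PyTorch introspection calls, nothing repository-specific:

```python
import sys

import torch

print(sys.version)                      # Python version
print(torch.__version__)                # PyTorch version
print(torch.version.cuda)               # CUDA toolkit PyTorch was built against
print(torch.backends.cudnn.version())   # cuDNN version
# The NVIDIA driver version is reported by the `nvidia-smi` command.
```
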
lmy98129 commented 3 years ago

Hello, Wan! We have successfully run the generation code you provided. However, more problems have come up:

  1. The visualization of the generated ground truth is very different from the figures shown in your paper. The code you provided only convolves a fixed-size Gaussian kernel over the point annotations, without any further processing.
  2. Meanwhile, we have learned from related papers that a KDTree is widely used for preprocessing in crowd counting. We followed the gaussian_filter_density code in Jupyter notebook block "In [6]" from here and preprocessed the point annotations of the dataset (sketches of both the fixed-kernel and the adaptive variant we tried follow after the screenshots).
  3. Unfortunately, the KDTree-preprocessed ground truth is still not similar to what is visualized in the paper. Perhaps this is because the provided code does not support JSON input and hdf5 output, which we implemented ourselves. Could you please give further instructions on the ground-truth generation? Or could you upload the ground truth to a cloud drive, e.g., Google Drive, as we suggested in the last comment?
    (Two screenshots attached: our generated density-map visualizations, 2021-06-12.)
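For reference, this is roughly what we tried. Both functions are sketches under our own assumptions (points given as an N×2 array in (x, y) order), not the authors' generation code; the adaptive version follows the usual MCNN-style geometry-adaptive recipe from the notebook linked above:

```python
# Sketch of the two density-map variants we compared.
import numpy as np
import scipy.spatial
from scipy.ndimage import gaussian_filter

def fixed_kernel_density(points, shape, sigma=4.0):
    """Fixed-size Gaussian convolved over the dot map (e.g. what 'fix4' may denote)."""
    dot_map = np.zeros(shape, dtype=np.float32)
    for x, y in points:
        dot_map[min(int(y), shape[0] - 1), min(int(x), shape[1] - 1)] += 1.0
    return gaussian_filter(dot_map, sigma)

def adaptive_kernel_density(points, shape, k=3, beta=0.3):
    """Geometry-adaptive kernel: sigma scales with the mean distance
    to the k nearest annotated neighbors."""
    density = np.zeros(shape, dtype=np.float32)
    if len(points) == 0:
        return density
    tree = scipy.spatial.KDTree(points)
    # Query k+1 neighbors because the nearest neighbor is the point itself.
    distances, _ = tree.query(points, k=min(k + 1, len(points)))
    for i, (x, y) in enumerate(points):
        dot = np.zeros(shape, dtype=np.float32)
        dot[min(int(y), shape[0] - 1), min(int(x), shape[1] - 1)] = 1.0
        if len(points) > 1:
            sigma = beta * np.mean(distances[i, 1:])
        else:
            sigma = np.mean(shape) / 4.0  # fallback for a single point
        density += gaussian_filter(dot, sigma)
    return density
```
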
jia-wan commented 3 years ago

In the paper, I use 3 types of density maps for comparison. According to your visualization, it seems that your density variance is small. Can you try a larger variance when generating the density map?
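Concretely, that just means increasing the sigma of the Gaussian; an illustrative sketch (the sigma values are only examples to try, not the ones used in the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy single-point dot map, just to show the effect of the variance.
dot_map = np.zeros((512, 512), dtype=np.float32)
dot_map[256, 256] = 1.0

for sigma in (4.0, 8.0, 16.0):
    density = gaussian_filter(dot_map, sigma)
    print(sigma, density.max())  # larger sigma -> wider, flatter blobs
```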

MondoGao commented 5 months ago

We have encountered the same problem. Could you elaborate on the detailed procedure for generating the density maps from this dataset?