Hzzone / PseCo

(CVPR 2024) Point, Segment and Count: A Generalized Framework for Object Counting

About weight file loading failed #7

Closed BlouseDong closed 6 months ago

BlouseDong commented 6 months ago

Sorry to bother you. I was fascinated by your excellent work. When trying to run your code, I followed your steps and used `cat` to combine the split weight files for the preprocessed dataset, but the combined weights could not be loaded. Is there a problem with my `cat` command? By the way, these files are unusually large.

Hzzone commented 6 months ago

They are large because I have stored the image features and all predictions for the FSC147 dataset in these files. They are the preprocessed training data, not the weight files. You can try the in-the-wild demo to avoid downloading these files. I will check your issue.

Hzzone commented 6 months ago

I have tried downloading the files from Google Drive and they work well.


You can try this Python code to download the files:

>>> import gdown
>>> gdown.download_folder('https://drive.google.com/drive/folders/1RwxDPiL3dcUJc14arrvJkgTpI2XSddGF')
BlouseDong commented 5 months ago

Thank you very much for your answer. I'm interested in the zero-shot pipeline in demo_in_the_wild.ipynb, and I'd like to ask you a few questions:

  1. Are the weights you trained unused in demo_in_the_wild.ipynb? I tested it and found that demo_in_the_wild.ipynb works well on the FSC147 dataset, so it seems to run directly on SAM, CLIP, and other pre-trained weights.
  2. Is the loaded MLP_small_box_w1_zeroshot.tar file the same kind of thing as a trained weight file?
  3. I don't know much about the NLP field. How should I train if I only want to focus on CLIP? And once I have the weights, how do I use them in demo_in_the_wild.ipynb?
Hzzone commented 5 months ago

1 and 2: you may have misunderstood my explanation. The large files are preprocessed training data, not the weight files. The weights are in the checkpoints. Please read the code.