Hello, thanks for your question.
Our code has two settings.
I think you are in the second case.
In this case, you need to:
First, pre-train the model on FSC-147 by directly running this shell script: https://github.com/zhiyuanyou/SAFECount/blob/main/experiments/FSC147_to_CARPK/pretrain/pretrain_torch.sh.
Second, organize your support images into an exemplar.json file. Please refer to https://github.com/zhiyuanyou/SAFECount/blob/main/data/CARPK_devkit/exemplar.json and organize your own json file accordingly.
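For illustration only, here is a minimal Python sketch of how such a file could be written. The field names and the [y1, x1, y2, x2] coordinate order are my assumptions, not the repository's confirmed schema, so please copy the exact structure from the linked CARPK exemplar.json.

```python
import json

# Hypothetical schema: map each support (exemplar) image to the boxes that
# mark the exemplar objects. Field names and coordinate order are assumptions;
# match them to data/CARPK_devkit/exemplar.json before relying on this.
exemplar = {
    "my_support_image_01.jpg": {
        "box": [
            [120, 80, 180, 160],   # assumed [y1, x1, y2, x2] of one exemplar object
            [210, 300, 270, 380],
        ],
    },
}

with open("exemplar.json", "w") as f:
    json.dump(exemplar, f, indent=2)
```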
Then, with the pre-trained weights and exemplar.json, set the exemplar.json path in your own config file (please refer to https://github.com/zhiyuanyou/SAFECount/blob/main/experiments/CARPK/config.yaml) and run the eval shell script (the same as https://github.com/zhiyuanyou/SAFECount/blob/main/experiments/CARPK/eval_torch.sh).
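As a minimal sketch of that last step, assuming hypothetical config keys and file paths (the real key names must be taken from the linked CARPK config.yaml, and the eval script itself determines which config it reads):

```python
import subprocess
import yaml  # PyYAML

# Load the reference config and point it at your own files.
# "dataset"/"exemplar" and "saver"/"load_path" are assumed key names,
# not the repository's confirmed ones; check config.yaml for the real keys.
with open("experiments/CARPK/config.yaml") as f:
    cfg = yaml.safe_load(f)

cfg["dataset"]["exemplar"] = "path/to/your/exemplar.json"        # assumed key
cfg["saver"]["load_path"] = "path/to/pretrained_checkpoint.pth"  # assumed key

with open("experiments/CARPK/config_custom.yaml", "w") as f:
    yaml.safe_dump(cfg, f)

# Launch the eval shell script; make sure it reads the config you just wrote.
subprocess.run(["bash", "experiments/CARPK/eval_torch.sh"], check=True)
```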
Feel free to let me know if you have further questions.
I was able to run the test case successfully using train_val.py, but when I tried to test this code on general images, I couldn't figure out how to get count results by providing only the query image we had prepared and the cropped support images. Do I have to prepare a json file with the boxes and points information?
If anyone is reading this, I would appreciate an answer.
I wrote this description with DeepL, so sorry for any awkward phrasing.