Verg-Avesta / CounTR

CounTR: Transformer-based Generalised Visual Counting
https://verg-avesta.github.io/CounTR_Webpage/
MIT License

Without Pretraining #37

Closed Haalum closed 10 months ago

Haalum commented 10 months ago
  1. I am just curious how the model performs without doing any pre-training; I only want to train the model on FSC-147 data. I commented out the following line in FSC_finetune_cross.py: `misc.load_model_FSC(args=args, model_without_ddp=model_without_ddp)`

Am I missing anything?

  1. Since I'm not fine-tuning but only training on FSC-147 data, any idea what number of epochs would give an optimal result?

Thanks!

Verg-Avesta commented 10 months ago

As FSC-147 is quite a small dataset, it will be hard to train a good feature extractor without any kind of pre-training.

But if you want to try it, besides what you have done, you should also remove the line that freezes the image encoder.
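
To illustrate the freeze/unfreeze distinction, here is a minimal PyTorch sketch. The two-layer model and its attribute names are stand-ins for illustration only, not CounTR's actual module structure; the actual line to remove lives in FSC_finetune_cross.py as noted above.

```python
# Sketch: the finetuning script freezes the (pre-trained) image encoder by
# disabling gradients on its parameters. Training from scratch on FSC-147
# instead requires leaving those parameters trainable.
# The `Sequential` below is a toy stand-in, NOT the real CounTR model.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 16),  # stand-in for the ViT image encoder
    nn.Linear(16, 1),  # stand-in for the counting decoder
)

# What the finetuning script effectively does (freeze the encoder):
for p in model[0].parameters():
    p.requires_grad = False

# What you would do instead when training without pre-training
# (i.e. the line above removed, so everything stays trainable):
for p in model[0].parameters():
    p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(trainable)  # all parameters remain trainable
```

The optimizer then receives gradients for the encoder as well, which is what makes end-to-end training on FSC-147 possible (if data-hungry).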

As for the number of epochs, maybe the more the better? But I am not quite sure whether it will work.

Haalum commented 10 months ago

I understand. I'm just curious. Thank you.