mlfoundations / open_clip

An open source implementation of CLIP.

`force_image_size` does not work for eva models #783

Status: Open. Opened by yxchng 6 months ago.

yxchng commented 6 months ago

Setting `force_image_size` to 672 still gives a model with the default 336 input size.

rwightman commented 6 months ago

I never got around to hooking that up to timm-based models. It's possible, but requires a bit of extra code to handle the resizing properly...

yxchng commented 1 week ago

@rwightman any update on this?

rwightman commented 1 week ago

@yxchng I actually do have some work in progress right now in timm to address https://github.com/huggingface/pytorch-image-models/issues/2190 (adding a `set_input_size()` fn to vit) that would allow this to work. I don't think I can rely on the pretrained load resize that's already there for timm weights.
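For context on what the "extra code to handle the resizing" involves: changing a ViT's input size (e.g. 336 to 672 at patch size 14, i.e. a 24x24 to 48x48 token grid) means the learned position-embedding grid must be resampled to the new grid shape. The sketch below is not open_clip's or timm's actual code (timm uses `torch.nn.functional.interpolate` with bicubic resampling); it is a minimal pure-Python bilinear resample over a grid of embedding vectors, just to illustrate the operation, with the function name `resize_pos_grid` being hypothetical.

```python
def resize_pos_grid(grid, new_h, new_w):
    """Bilinearly resample a 2D grid of embedding vectors to (new_h, new_w).

    `grid[i][j]` is a list of floats (one embedding vector per token position).
    Illustrative only; real implementations operate on tensors and typically
    use bicubic interpolation.
    """
    old_h, old_w = len(grid), len(grid[0])
    dim = len(grid[0][0])
    out = []
    for i in range(new_h):
        # Map the output row into the old grid (align_corners=True style).
        y = i * (old_h - 1) / (new_h - 1) if new_h > 1 else 0.0
        y0 = int(y)
        y1 = min(y0 + 1, old_h - 1)
        fy = y - y0
        row = []
        for j in range(new_w):
            x = j * (old_w - 1) / (new_w - 1) if new_w > 1 else 0.0
            x0 = int(x)
            x1 = min(x0 + 1, old_w - 1)
            fx = x - x0
            # Blend the four neighbouring embedding vectors per dimension.
            vec = [
                (grid[y0][x0][d] * (1 - fy) + grid[y1][x0][d] * fy) * (1 - fx)
                + (grid[y0][x1][d] * (1 - fy) + grid[y1][x1][d] * fy) * fx
                for d in range(dim)
            ]
            row.append(vec)
        out.append(row)
    return out
```

For the EVA case in this issue, the same idea would be applied to go from the pretrained 24x24 grid (336/14) to a 48x48 grid (672/14), which is the part `force_image_size` would need wired up for timm-backed models.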