stevebottos / owl-vit-object-detection

object detection based on owl-vit
MIT License
52 stars 9 forks

owlvit-large model and owlv2 model finetune #20

Open 1633232731 opened 5 months ago

1633232731 commented 5 months ago

Thanks for your amazing work~

For my task I'd like to try a larger model (owlvit-large) or the latest model (owlv2). However, when I modified the "load_model" function to switch models, it reported the error below:

Traceback (most recent call last):
  File "/mnt/sdb/zhangxichen/project/my-aircraft/model/owl-vit-object-detection/main.py", line 90, in <module>
    all_pred_boxes, pred_classes, pred_sims, _ = model(image)
  File "/mnt/sdb/zhangxichen/anaconda3/envs/my_aircraft/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/mnt/sdb/zhangxichen/anaconda3/envs/my_aircraft/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/sdb/zhangxichen/project/my-aircraft/model/owl-vit-object-detection/src/models.py", line 104, in forward
    feature_map = self.image_embedder(image)
  File "/mnt/sdb/zhangxichen/project/my-aircraft/model/owl-vit-object-detection/src/models.py", line 79, in image_embedder
    vision_outputs = self.backbone(pixel_values=pixel_values)
  File "/mnt/sdb/zhangxichen/anaconda3/envs/my_aircraft/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/mnt/sdb/zhangxichen/anaconda3/envs/my_aircraft/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/sdb/zhangxichen/anaconda3/envs/my_aircraft/lib/python3.10/site-packages/transformers/models/owlvit/modeling_owlvit.py", line 923, in forward
    hidden_states = self.embeddings(pixel_values)
  File "/mnt/sdb/zhangxichen/anaconda3/envs/my_aircraft/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/mnt/sdb/zhangxichen/anaconda3/envs/my_aircraft/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/sdb/zhangxichen/anaconda3/envs/my_aircraft/lib/python3.10/site-packages/transformers/models/owlvit/modeling_owlvit.py", line 301, in forward
    embeddings = embeddings + self.position_embedding(self.position_ids)
RuntimeError: The size of tensor a (2917) must match the size of tensor b (3601) at non-singleton dimension 1

It seems like

vision_outputs = self.backbone(pixel_values=pixel_values)

has a dimension mismatch error. How can I fix it?

Thanks for your help~
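The two sizes in the error look consistent with a resolution mismatch rather than a model bug: with 14×14 patches, a ViT over an 840×840 image yields a 60×60 patch grid plus a CLS token (3601 position embeddings), while a 768×768 input yields 54×54 plus CLS (2917 tokens). A quick sanity check on the arithmetic (the 840 and 768 sizes are my assumption from the token counts, not something verified against the repo):

```python
def num_vit_tokens(image_size: int, patch_size: int) -> int:
    """Tokens a ViT backbone produces: one per patch, plus the CLS token."""
    return (image_size // patch_size) ** 2 + 1

# owlvit-large-patch14 inputs at 840x840 -> matches "tensor b (3601)"
print(num_vit_tokens(840, 14))  # 3601

# a 768x768 input (the base model's resolution) -> matches "tensor a (2917)"
print(num_vit_tokens(768, 14))  # 2917
```

If that reading is right, the fix would be to resize inputs to the resolution from the new checkpoint's config (e.g. via the matching processor) instead of the base model's hardcoded size.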

stevebottos commented 5 months ago

Hey, sorry about the delay here. As you can probably tell I've been away from this project for a while. I didn't design it initially with the idea of changing the backbones in mind. I'm thinking about revisiting this repo and cleaning things up. Have you managed to solve the issue?

MattLiutt commented 4 months ago

Hi, thanks for the work! I also changed the load_model script a bit, and it raised TypeError: Owlv2ForObjectDetection.forward() got an unexpected keyword argument 'images'. I managed to solve that one, but then hit a similar issue to yours:

  File "C:\Users\matth\miniconda3\envs\owl\lib\site-packages\transformers\models\owlv2\modeling_owlv2.py", line 1340, in compute_box_bias
    box_coordinates = self.normalize_grid_corner_coordinates(num_patches)
  File "C:\Users\matth\miniconda3\envs\owl\lib\site-packages\transformers\models\owlv2\modeling_owlv2.py", line 1307, in normalize_grid_corner_coordinates
    x_coordinates = torch.arange(1, num_patches + 1, dtype=torch.float32)
TypeError: arange() received an invalid combination of arguments - got (int, Tensor, dtype=torch.dtype), but expected one of:
 * (Number end, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
 * (Number start, Number end, *, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
 * (Number start, Number end, Number step, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)

Do you have any updates on this project? @1633232731
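A possible workaround for the arange() TypeError, as an untested sketch: the traceback suggests num_patches reaches torch.arange as a 0-dim tensor, while every overload listed in the error wants a plain Number, so casting it to int first should satisfy the (Number start, Number end, ...) overload:

```python
import torch

# Stand-in for the value modeling_owlv2 computes; the traceback suggests it
# arrives here as a tensor rather than a Python int.
num_patches = torch.tensor(60)

# torch.arange(1, num_patches + 1, dtype=torch.float32) is the failing call.
# Converting the 0-dim tensor to a plain int matches the expected overload:
x_coordinates = torch.arange(1, int(num_patches) + 1, dtype=torch.float32)

print(x_coordinates.numel())  # 60
```

Applying the same cast inside normalize_grid_corner_coordinates (or passing the patch count as a plain int from the caller) should sidestep the overload mismatch, though I haven't patched my install to confirm.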