autodistill / autodistill-owl-vit

OWL-ViT module for Autodistill.
https://autodistill.com
Apache License 2.0

RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same or input should be a MKLDNN tensor and weight is a dense tensor #4

Open sudoLife opened 5 months ago

sudoLife commented 5 months ago

Hi,

While following this guide, I get the error in the title. My code:

from autodistill_owl_vit import OWLViT
from autodistill.detection import CaptionOntology
base_model = OWLViT(
    ontology=CaptionOntology(
        {
            "white strawberry in cream": "white strawberry",
            "red strawberry": "red strawberry"
        }
    )
)
result = base_model.predict("2544.jpg")
print(result)

Any ideas?

The error occurs here:

File "/home/sudolife/projects/yolo-world/venv/lib/python3.11/site-packages/torch/nn/modules/conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,

I might investigate further a little later, but the stack trace is quite deep so it'll take some time.

sudoLife commented 5 months ago

Okay, so that happened because the model was moved to the GPU while the inputs were left on the CPU. Moving the inputs to the same device fixed it.
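For anyone hitting the same error: a minimal sketch of the device-mismatch pattern, using plain `torch` rather than the actual OWL-ViT code inside the package (the stand-in model and shapes here are illustrative, not the package's real layers):

```python
import torch

# Pick the same device the model was placed on.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in for any model whose weights were moved to the GPU.
model = torch.nn.Conv2d(3, 8, kernel_size=3).to(device)

# The RuntimeError in this issue occurs when the input tensor stays on the
# CPU while the weights are on CUDA. Moving the input to the model's device
# before the forward pass resolves it:
x = torch.randn(1, 3, 32, 32).to(device)

out = model(x)
print(out.shape)  # torch.Size([1, 8, 30, 30])
```

On a CPU-only machine `device` falls back to `"cpu"`, so both tensors already agree and the snippet runs either way.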

ArghyaChatterjee commented 4 months ago

Still getting the issue — is there a fix for this? It looks like you haven't pushed the changes to this branch or updated the package that gets installed with pip install autodistill-owl-vit.

sudoLife commented 3 months ago

> Still getting the issue, is there any fix for this ?? Looks like you haven't pushed the changes to this branch or updated the binaries being installed with pip install autodistill-owl-vit .

Hi, I can't push the changes to the branch because I'm not the maintainer of the project. That being said, you can apply the pull request to your installed version and it should work :)
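One possible way to apply the fix without waiting for a release: pip can install straight from a GitHub pull request ref. This is a hedged sketch — the PR number is not stated in this thread, so the placeholder below must be replaced with the actual number from the repository's pull request page:

```shell
# Hypothetical: substitute <PR_NUMBER> with the pull request that contains
# the device fix. GitHub exposes every open PR at refs/pull/N/head.
pip install --upgrade "git+https://github.com/autodistill/autodistill-owl-vit.git@refs/pull/<PR_NUMBER>/head"
```

Alternatively, the changed file can be edited by hand inside the installed package under site-packages, which is what "apply the pull request to your installed version" amounts to.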