salesforce / BLIP

PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation
BSD 3-Clause "New" or "Revised" License

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same #168

Open HWH-2000 opened 1 year ago

HWH-2000 commented 1 year ago

There is a small error in the Image-Text Matching cell of `demo.ipynb`:

```python
image = load_demo_image(image_size=image_size, device=device)
model = model.to(device='cpu')
```

The image is loaded onto `device` (CUDA when available) while the model is moved to the CPU, which produces the RuntimeError above. The correct code keeps both on the same device:

```python
image = load_demo_image(image_size=image_size, device=device)
model = model.to(device)
```
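The mismatch can be reproduced with a minimal sketch (a hypothetical toy model, not the BLIP model itself): the input follows `device`, so the model must too. On a CUDA machine, leaving the model on the CPU raises the same RuntimeError.

```python
import torch

# Pick CUDA when available, falling back to CPU, as demo.ipynb does.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = torch.nn.Linear(4, 2)                # stand-in for the BLIP model
image = torch.randn(1, 4, device=device)     # input tensor follows `device`

# Buggy notebook line: model = model.to(device='cpu')
# On a CUDA machine this leaves the weights on CPU while `image` is on CUDA,
# triggering "Input type (torch.cuda.FloatTensor) and weight type
# (torch.FloatTensor) should be the same".

# Fix: move the model to the same device as the input.
model = model.to(device)
out = model(image)
print(out.device.type == device.type)  # no device mismatch
```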