Closed: laxmimerit closed this issue 9 months ago
Hi, I was following this notebook: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BLIP-2/Chat_with_BLIP_2.ipynb

How can I generate multiple captions for a single image? Here is a sample code snippet from the demo.

Resolution: used the sampling method. With do_sample=True and top_p=0.95, model.generate draws tokens stochastically, so repeated calls on the same image produce different captions:

inputs = processor(images=image, return_tensors="pt").to(device, torch.float16)
generated_ids = model.generate(**inputs, do_sample=True, top_p=0.95)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(generated_text)
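For intuition on what top_p=0.95 does here: nucleus (top-p) sampling keeps only the smallest set of highest-probability tokens whose cumulative probability reaches top_p, renormalizes, and samples within that set. Below is a minimal pure-Python sketch of the idea over a toy next-token distribution; it is an illustration of the technique, not the Transformers implementation.

```python
import random

def top_p_sample(probs, top_p=0.95, rng=random):
    """Draw one token via nucleus (top-p) sampling from a {token: prob} dict."""
    # Sort tokens by descending probability and keep the smallest prefix
    # whose cumulative probability reaches top_p (the "nucleus").
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for token, p in ranked:
        nucleus.append((token, p))
        cumulative += p
        if cumulative >= top_p:
            break
    # Renormalize within the nucleus and sample proportionally.
    total = sum(p for _, p in nucleus)
    r = rng.random() * total
    for token, p in nucleus:
        r -= p
        if r <= 0:
            return token
    return nucleus[-1][0]

# Toy distribution: "cat" dominates, but sampling still varies among the
# tokens inside the nucleus; the low-probability tail is cut off entirely.
probs = {"cat": 0.6, "dog": 0.25, "bird": 0.1, "xylophone": 0.05}
rng = random.Random(0)
draws = [top_p_sample(probs, top_p=0.95, rng=rng) for _ in range(1000)]
print(sorted(set(draws)))  # → ['bird', 'cat', 'dog']  ("xylophone" never sampled)
```

This variability is why re-running the generate call above yields multiple captions. Also note that the Transformers generate API accepts num_return_sequences=N together with do_sample=True, which returns N independently sampled sequences in a single call, so processor.batch_decode then yields a list of N candidate captions instead of one.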