CrapbagMo opened this issue 1 year ago
Hi, this is because, for inference, we need to provide the positive_map_label_to_token field, which specifies the token positions from which we want to predict boxes. This field differs from prompt to prompt, and handling multiple positive_map_label_to_token maps was a bit cumbersome, so we used only batch_size = 1.
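A minimal sketch of what this looks like, assuming a HuggingFace-style fast tokenizer; `build_positive_map_label_to_token`, `model`, `tokenizer`, and `dataset` are illustrative placeholders rather than the repo's actual API, and the call signature of the model is an assumption:

```python
# Hypothetical sketch: each prompt produces its own positive_map_label_to_token,
# so the simplest inference path runs one image/prompt pair at a time.

def build_positive_map_label_to_token(prompt, tokenizer, label_spans):
    """Map each label id to the token positions of its phrase in `prompt`.

    `label_spans` is assumed to be {label_id: (char_start, char_end)} giving the
    character span of the phrase describing that label inside the prompt string.
    """
    encoding = tokenizer(prompt, return_offsets_mapping=True)
    positive_map = {}
    for label_id, (start, end) in label_spans.items():
        positive_map[label_id] = [
            i for i, (tok_start, tok_end) in enumerate(encoding["offset_mapping"])
            if tok_start < end and tok_end > start  # token overlaps the phrase span
        ]
    return positive_map

# Because the map depends on the prompt, batching would mean stacking a different
# map per sample, so the loop below keeps an effective batch size of 1.
for image, prompt, label_spans in dataset:  # placeholder iterable
    positive_map_label_to_token = build_positive_map_label_to_token(
        prompt, tokenizer, label_spans
    )
    predictions = model(  # placeholder call, not the repo's real signature
        image[None], [prompt],
        positive_map_label_to_token=positive_map_label_to_token,
    )
```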
I want to use two GPUs to run inference. Can I set images per batch to 2 and world_size to 2, so that the number of images per batch on each GPU is still 1?
I have tried it and it works. It would be even better if the batch size could be increased during grounding.
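A minimal sketch of that two-GPU workaround, assuming a launch such as `torchrun --nproc_per_node=2 inference.py`; `load_model`, `run_single_image`, and `image_paths` are placeholders, not the repo's actual functions:

```python
import torch
import torch.distributed as dist

def main():
    # torchrun sets the env vars init_process_group needs (RANK, WORLD_SIZE, ...)
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    world_size = dist.get_world_size()
    torch.cuda.set_device(rank)

    model = load_model().to(f"cuda:{rank}")      # placeholder model loader
    my_images = image_paths[rank::world_size]    # round-robin shard across GPUs

    results = []
    for path in my_images:
        # each process still runs with an effective batch size of 1
        results.append(run_single_image(model, path))

    dist.barrier()  # wait for all ranks before gathering or saving results

if __name__ == "__main__":
    main()
```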
Hi, thanks for your great work. In inference.py, can you please explain the comment # Let's just use one image per batch? Can I use a bigger batch size?