sdwfzczck opened 7 months ago
Hello @sdwfzczck - I think this is a limitation of segment-anything itself? The paper says the model was trained on images up to 1024x1024.

Now, the error you report can potentially be worked around by turning off this optimization: update the code to use mask_to_rle_pytorch instead of mask_to_rle_pytorch_2. If that resolves the issue, I can add a flag to disable the optimization at runtime. But even if it does work, I'm surprised you got this far, since the model itself seems to be limited to 1024x1024?
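As a rough illustration of where an INT_MAX error can come from with large images: some fused kernels use 32-bit indexing, so a stacked boolean mask tensor whose total element count exceeds 2^31 - 1 can overflow. This is a back-of-the-envelope sketch, not the library's actual check; the mask count of 1000 is a hypothetical value chosen for illustration.

```python
INT32_MAX = 2**31 - 1  # the INT_MAX limit that 32-bit indexed kernels hit

def exceeds_int_max(height, width, num_masks):
    """Return True if a (num_masks, height, width) bool mask tensor
    would contain more elements than a 32-bit signed index can address."""
    return num_masks * height * width > INT32_MAX

# A 1500x2000 image with a hypothetical 1000 candidate masks overflows:
print(exceeds_int_max(1500, 2000, 1000))   # 3.0e9 elements > 2.147e9 -> True

# The same mask count at the model's 1024x1024 training size does not:
print(exceeds_int_max(1024, 1024, 1000))   # 1.05e9 elements -> False
```

This also explains why shrinking the image makes the error disappear: the element count scales linearly with height * width.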
When I use the "vit_h" model to run inference on images, once the image size is too large, such as 1500*2000, I get an error saying it exceeds INT_MAX. However, when I reduce the image size, the error no longer occurs. How can I solve this issue?
Looking forward to the author's response, thank you!
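Since the model is limited to 1024x1024, one practical workaround for oversized inputs like 1500*2000 is to rescale the image so its longer side is at most 1024 before running inference, which matches SAM's resize-longest-side preprocessing. A minimal sketch; the helper name `target_size` is mine, not part of the library:

```python
def target_size(h, w, long_side=1024):
    """Compute (new_h, new_w) so the longer side equals long_side
    while preserving the aspect ratio."""
    scale = long_side / max(h, w)
    return round(h * scale), round(w * scale)

# The 1500*2000 image from the report would be resized to:
print(target_size(1500, 2000))   # (768, 1024)
```

Resizing this way keeps the input within the range the model was trained on, so the INT_MAX error should not be triggered in the first place.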