jtsanborn1 opened 2 months ago

Hi! Thanks for making this TensorRT conversion, it is really fast!
Is there a way to run inference on an image sequence instead of a single frame?
Thanks!
Do you mean batching the images? Or running inference on each image sequentially?
I mean applying the inference to a whole folder of images. This is mostly for video; I don't care if it's not temporally consistent.
Okay, I will implement this feature today.
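Roughly, it will just loop over every image in a folder and run the existing single-image inference on each frame. A minimal sketch of the idea (the preprocessing and the `run_on_folder` / `model(x)` calls here are placeholders, not the repo's actual code):

```python
import glob
import os

import torch
from PIL import Image
from torchvision import transforms

# Placeholder preprocessing matching the usual 1024x1024 BiRefNet pipeline.
preprocess = transforms.Compose([
    transforms.Resize((1024, 1024)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def run_on_folder(model, in_dir, out_dir, device="cuda"):
    """Run single-image inference on every frame in `in_dir` and save the masks."""
    os.makedirs(out_dir, exist_ok=True)
    for path in sorted(glob.glob(os.path.join(in_dir, "*.png"))):
        image = Image.open(path).convert("RGB")
        x = preprocess(image).unsqueeze(0).to(device)
        with torch.no_grad():
            mask = model(x)  # placeholder call; the real output format may differ
        # Scale the predicted mask to 8-bit, resize back to the frame size, and save it.
        mask_u8 = (mask.squeeze().cpu().clamp(0, 1) * 255).byte()
        out = Image.fromarray(mask_u8.numpy(), mode="L").resize(image.size)
        out.save(os.path.join(out_dir, os.path.basename(path)))
```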
Great!!! Thank you!!!
Also, is it possible to run inference at a higher resolution? I've tried changing the values from 1024 to 2048 or higher, but it gives me a shape error... I'm not any good at coding and have zero clue about this.
Resize((2048, 2048)), ToTensor(), Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
I'm really happy with your TensorRT conversion, man, it is so fast. Thank you!
Hello, why do you want to run inference at 2048 resolution? BiRefNet is trained at 1024 resolution, so running inference at 1024 will give you better results and also use less memory.
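For what it's worth, the shape error at 2048 is most likely because the input size is baked into the exported ONNX graph. You can check what the model actually expects with a couple of lines of onnxruntime (the filename here is just a placeholder):

```python
import onnxruntime as ort

sess = ort.InferenceSession("birefnet.onnx")  # placeholder path to the exported model
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)  # e.g. "input" [1, 3, 1024, 1024] for a fixed 1024x1024 export
```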
I know it's trained at 1024 px, but sometimes we need higher-resolution input to avoid upscaling artifacts and blocky-looking edges. The same thing happened with Depth Anything: it was trained at 518 px, but if you run it at a higher resolution the output map is sharper, which is great for video production.
Don't worry about memory, I have a high-VRAM GPU here.
Thanks
Hi, the feature to run inference on an entire folder is now supported. Please check the README for more details.
Oh man! I just tested it and it works like a charm! Thank you, thank you! What about running it at a higher resolution, as I asked above? Is it possible?
Thanks
Hi, resizing to 2048x2048 for inference is theoretically feasible. However, the ONNX model needs to be reconverted, which will undoubtedly take some time. For more details, please see here.
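For reference, the reconversion is essentially the same torch.onnx.export call with a larger dummy input; how the PyTorch BiRefNet weights are loaded is left out here, so treat this as a sketch rather than the repo's actual export script:

```python
import torch

def export_at_resolution(model: torch.nn.Module, size: int = 2048,
                         out_path: str = "birefnet_2048.onnx") -> None:
    """Re-export an already-loaded PyTorch BiRefNet model with a fixed size x size input."""
    model.eval()
    dummy = torch.randn(1, 3, size, size)  # the new, larger input resolution
    torch.onnx.export(
        model,
        dummy,
        out_path,
        input_names=["input"],
        output_names=["output"],
        opset_version=17,
        do_constant_folding=True,
    )
```

The TensorRT engine would then need to be rebuilt from the new ONNX file, and the Resize in the preprocessing changed to (2048, 2048) to match.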