yuanyang1991 / birefnet_tensorrt

BiRefNet inference using TensorRT

Image Sequence Inference #1

Open jtsanborn1 opened 2 months ago

jtsanborn1 commented 2 months ago

Hi! Thanks for making this TensorRT conversion, it is really fast!

Is there a way to run inference on an image sequence instead of a single frame?

Thanks!

yuanyang1991 commented 2 months ago

Do you mean batching the images? Or running inference on each image sequentially?

jtsanborn1 commented 2 months ago

I mean applying the inference to a whole folder of images. This is mostly for video; I don't care if it's not temporally consistent.

yuanyang1991 commented 2 months ago

Okay, I will implement this feature today.

jtsanborn1 commented 2 months ago

Great!!! Thank you!!!

jtsanborn1 commented 2 months ago

Also, is it possible to run inference at a higher resolution? I've tried modifying the values from 1024 to 2048 or higher, but it gives me a shape error... I'm not any good at coding, so I have zero clue about this.

```python
Resize((2048, 2048)), ToTensor(), Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
```

I'm really happy with your TensorRT conversion, man, it is so fast. Thank you!

yuanyang1991 commented 2 months ago

Hello, why do you want to run inference at 2048×2048 resolution? BiRefNet is trained at 1024×1024, so running inference at 1024 will give you better results and also use less memory.
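
On the shape error itself: a TensorRT engine is usually built with a fixed input size baked in, so changing only the preprocessing resize makes the input tensor mismatch the engine's bindings. A quick sketch for checking what an engine expects, assuming TensorRT >= 8.5 (the engine file name here is hypothetical):

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
# "birefnet.engine" is a hypothetical path; point this at your engine file.
with open("birefnet.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Print every I/O tensor with its fixed shape,
# e.g. an input of (1, 3, 1024, 1024) for a 1024-resolution build.
for i in range(engine.num_io_tensors):
    name = engine.get_tensor_name(i)
    print(name, engine.get_tensor_shape(name))
```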

jtsanborn1 commented 2 months ago

I know it's trained at 1024 px, but sometimes we need higher-resolution input to avoid upscaling artifacts and blocky-looking edges. The same thing happened with Depth Anything: it was trained at 518 px, but if you run it at a higher resolution the output map is sharper, which is great for video production.

Don't worry about memory, I have a high-VRAM GPU here.

Thanks

yuanyang1991 commented 2 months ago

Hi, the feature to run inference on an entire folder is now supported. Please check the README for more details.
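
For anyone reading along, folder inference is essentially the single-image path looped over a directory. A minimal sketch of the pattern (the `infer_single` helper is a hypothetical stand-in for the repo's actual entry point, so check the README for the real usage):

```python
from pathlib import Path

from PIL import Image

def infer_folder(src_dir: str, dst_dir: str) -> None:
    """Run per-image inference over every PNG/JPG in src_dir."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for img_path in sorted(Path(src_dir).iterdir()):
        if img_path.suffix.lower() not in {".png", ".jpg", ".jpeg"}:
            continue
        # `infer_single` is hypothetical: it stands in for whatever function
        # the repo exposes to run the TensorRT engine on one PIL image.
        mask = infer_single(Image.open(img_path).convert("RGB"))
        mask.save(out / f"{img_path.stem}.png")
```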

jtsanborn1 commented 2 months ago

Oh man! I just tested it and it works like a charm! Thank you, thank you! What about running it at a higher resolution, as I asked above? Is it possible?

Thanks

yuanyang1991 commented 2 months ago

Hi, resizing to 2048×2048 for inference is theoretically feasible. However, the ONNX model needs to be reconverted, which will undoubtedly take some time. For more details, please see here.
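
For reference, reconversion roughly means re-exporting the PyTorch model to ONNX with a 2048×2048 dummy input and then rebuilding the TensorRT engine from the new file. A minimal sketch under those assumptions (the loading helper and file names are hypothetical):

```python
import torch

# `load_birefnet` is a hypothetical stand-in for however you load the
# PyTorch BiRefNet checkpoint (see the upstream BiRefNet repo).
model = load_birefnet()
model.eval()

# Export with a 2048x2048 dummy input so the ONNX graph is traced
# at the new fixed resolution instead of the original 1024x1024.
dummy = torch.randn(1, 3, 2048, 2048)
torch.onnx.export(
    model,
    dummy,
    "birefnet_2048.onnx",  # hypothetical output path
    input_names=["input"],
    output_names=["output"],
    opset_version=17,
)

# The TensorRT engine then has to be rebuilt from the new ONNX file,
# for example with NVIDIA's trtexec CLI:
#   trtexec --onnx=birefnet_2048.onnx --saveEngine=birefnet_2048.engine --fp16
```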