NVIDIA-AI-IOT / nanoowl

A project that optimizes OWL-ViT for real-time inference with NVIDIA TensorRT.
Apache License 2.0

Can I use this model with deepstream-app? #6

Open changsubi opened 10 months ago

changsubi commented 10 months ago

Can I use this model with deepstream-app?

pakyurekm commented 10 months ago

Please provide a configuration file for the engine file model. I prepared one config file for DeepStream integration:

```
[property]
gpu-id=0
model-engine-file=/home/magnificent/projects/loki/yolox-detector/activity_detection/nanoowl_utils/data/owl_image_encoder_patch32.engine
process-mode=2
network-mode=2
net-scale-factor=0.0146
offsets=122.77;116.75;104.094
secondary-reinfer-interval=0
gie-unique-id=2
output-blob-names=LayerNorm
output-tensor-meta=1
network-type=1
operate-on-gie-id=1
operate-on-class-ids=2
```

At the very least, please share the correct output-blob-names for OwlViTVisionTransformer.
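As a side note on the config above: the `net-scale-factor` and `offsets` values appear to be derived from the standard CLIP image normalization statistics that OWL-ViT uses (this is an assumption, not confirmed by the thread). A minimal sketch of that derivation, noting that DeepStream's `nvinfer` takes offsets in 0-255 pixel units and only a single scalar scale factor:

```python
# Sketch (assumption): DeepStream preprocessing values derived from the
# CLIP normalization stats used by OWL-ViT's image encoder.
CLIP_MEAN = (0.48145466, 0.4578275, 0.40821073)  # per-channel mean, 0-1 range
CLIP_STD = (0.26862954, 0.26130258, 0.27577711)  # per-channel std, 0-1 range

# DeepStream `offsets` are subtracted from 0-255 pixel values,
# so scale the 0-1 means up by 255.
offsets = [round(m * 255, 3) for m in CLIP_MEAN]

# `net-scale-factor` is a single scalar; here the red-channel std is used
# as an approximation for all three channels.
net_scale_factor = round(1 / (CLIP_STD[0] * 255), 4)

print(offsets)           # → [122.771, 116.746, 104.094]
print(net_scale_factor)  # → 0.0146
```

The computed values match the config's `offsets=122.77;116.75;104.094` and `net-scale-factor=0.0146` to within rounding, which suggests the preprocessing side of the config is at least consistent with OWL-ViT's expected input normalization.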

angeelalg commented 10 months ago

What's new?

Have you managed to get nanoOWL working in Deepstream?