isarsoft / yolov4-triton-tensorrt

This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server
http://www.isarsoft.com

How to run multiple Yolov4 models with different anchors? #33

Closed ming1144 closed 3 years ago

ming1144 commented 3 years ago

As the title says, I have two YOLOv4 models: one for the full image and the other for sub-images. The data flow is: Input Image -> Model1 -> sub-images -> Model2 -> Results. The two models were trained on different datasets, so they have different anchors.

My problem is: what should I do? The anchors seem to be hard-coded in YoloLayer. Should I create a different YoloLayer for each model?

philipp-schmidt commented 3 years ago

Hi @ming1144, I have updated the plugin code. It now uses PluginFields, through which we can parameterize the anchors and everything else. The plugin can now be used universally across different networks. Have a look at networks/yolov4tiny.h for an example.

philipp-schmidt commented 3 years ago

You now only have to load the plugin once and can use it with multiple network definitions.

ming1144 commented 3 years ago

Thank you, Philipp.

I've read the code and still have some questions.

If I have two models with different configs, do I have to modify the config in the header to create an engine for each model, and load the .so file only once?

philipp-schmidt commented 3 years ago

Yes. Load/compile the plugin once, then build as many engines as you need: change the network config in the header, compile, run the executable and supply the weights, and copy the resulting .engine file(s) to Triton.