isarsoft / yolov4-triton-tensorrt

This repository deploys YOLOv4 as an optimized TensorRT engine to Triton Inference Server
http://www.isarsoft.com

[QUESTIONS] .wts file and plugin file #60

Closed devaale closed 2 years ago

devaale commented 2 years ago

I was wondering: do I need to use my own generated .wts file (converted from my .weights file) when building the plugin library (liblayerplugin.so)? This step here:

If I'm going to create the engine separately using this repo: pytorch-YOLOv4? Basically, am I okay using the .wts from Google Drive, or do I need to convert my own .weights / .cfg files to .wts? Will it affect the results?

Additionally, I wanted to ask: when generating liblayerplugin.so, should I use my yolov4.cfg file and edit the code to match my data? For example, in networks/yolov4.h:

    // stuff we know about the network and the input/output blobs
    static const int INPUT_H = 608;
    static const int INPUT_W = 608;
    static const int CLASS_NUM = 80;
philipp-schmidt commented 2 years ago

You don't need the .wts file to compile the layer plugin, but you do need to load weights when you create the YOLOv4 TensorRT engine. You can use the .wts file we provide on Google Drive, which corresponds to the default yolov4 weights you would get by downloading them from darknet (the repo YOLOv4 comes from and was implemented in). Or you can use your own .wts file from your own training. If your training changed any of the network parameters, then yes, you will have to change the corresponding values in the code as well, especially the lines you highlighted, and possibly also the yolo layer anchors if you changed them.

philipp-schmidt commented 2 years ago

If you post more details (e.g. your network config) we might be able to help you.

devaale commented 2 years ago

@philipp-schmidt Sorry for the late response. I would appreciate help figuring this out. Link to my .cfg file: https://drive.google.com/drive/folders/1A65yRs2Lbw7n6LYeC_mps9ZMO9gRuVlK?usp=sharing