dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License
7.74k stars 2.97k forks

custom caffe model convert to tensorrt issues #1109

Closed SokPhanith closed 1 year ago

SokPhanith commented 3 years ago

Hello @dusty-nv, I trained an image-classification model (4 classes) on my own custom dataset with Caffe, starting from the files under jetson-inference/build/aarch64/bin/networks/ResNet-18/. 1) First training:

I built the TensorRT engine and inference works well. 2) Second training:

I want to ask: when TensorRT runs image-classification inference on a Caffe-format model, what does the default transform_param look like? If I have a mean.binaryproto file computed by the Caffe tool, how can I set its path for inference? I am not sure whether TensorRT inference needs it or not. By the way, for both trainings I tested on my validation set and got about the same accuracy, around 98%.
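For context, on the Caffe training side the mean is specified through transform_param in the data layer of the prototxt. A typical fragment looks like the sketch below (paths and values are illustrative, not from my actual prototxt); note that Caffe accepts either a per-pixel mean_file or per-channel mean_value entries, but not both in the same layer:

```
layer {
  name: "data"
  type: "Data"
  transform_param {
    # Option A: per-pixel mean computed by compute_image_mean
    mean_file: "mean.binaryproto"
    # Option B: one mean value per channel (BGR order), instead of mean_file
    # mean_value: 104
    # mean_value: 117
    # mean_value: 123
  }
}
```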

dusty-nv commented 3 years ago

Hi @SokPhanith, jetson-inference doesn't support loading the mean.binaryproto file - if needed, you can change the average mean pixel value here: https://github.com/dusty-nv/jetson-inference/blob/19ed62150b3e9499bad2ed6be1960dd38002bb7d/c/imageNet.cpp#L449
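For reference, the preprocessing at that line conceptually just subtracts one mean value per channel from every pixel, so changing the mean amounts to changing three numbers. A minimal Python sketch of the idea (the function name and values are illustrative, not code from the repo):

```python
# Conceptual sketch only -- not jetson-inference code. Classification
# preprocessing subtracts a single per-channel mean from every pixel.

def subtract_channel_mean(image, mean_bgr):
    """image: rows of (B, G, R) pixel tuples; mean_bgr: per-channel means."""
    mb, mg, mr = mean_bgr
    return [[(b - mb, g - mg, r - mr) for (b, g, r) in row] for row in image]

# Example: one BGR pixel centered with the classic ILSVRC channel means.
pixel_rows = [[(104.0, 117.0, 123.0)]]
centered = subtract_channel_mean(pixel_rows, (104.0, 117.0, 123.0))
```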

However, this does not seem necessary if you are still getting 98% accuracy.

SokPhanith commented 3 years ago

Thank you @dusty-nv. Sorry, can you give me an example of adding code to load the mean.binaryproto file?

dusty-nv commented 3 years ago

Sorry, I don't have code for doing that, or for using the per-pixel mean data in the preprocessing.
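Not from the repo, but for anyone who lands here: mean.binaryproto is a serialized Caffe BlobProto message, so its per-channel means can be recovered without Caffe installed by walking the protobuf wire format directly. Below is a stdlib-only Python sketch under the assumption that the blob uses the deprecated num/channels/height/width fields (1-4) and packed float data in field 5, which is what Caffe's compute_image_mean tool writes; the field numbers come from caffe.proto:

```python
import struct

def read_varint(buf, pos):
    """Decode one protobuf varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = buf[pos]; pos += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, pos
        shift += 7

def parse_blobproto(buf):
    """Minimal BlobProto reader: returns ({field: dim}, [float data])."""
    dims, data, pos = {}, [], 0
    while pos < len(buf):
        tag, pos = read_varint(buf, pos)
        field, wire = tag >> 3, tag & 7
        if wire == 0:                       # varint: num/channels/height/width
            val, pos = read_varint(buf, pos)
            dims[field] = val
        elif wire == 2:                     # length-delimited: packed floats
            length, pos = read_varint(buf, pos)
            if field == 5:
                data = list(struct.unpack("<%df" % (length // 4),
                                          buf[pos:pos + length]))
            pos += length
        elif wire == 5:                     # 32-bit: unpacked float
            if field == 5:
                data.append(struct.unpack("<f", buf[pos:pos + 4])[0])
            pos += 4
        else:
            raise ValueError("unsupported wire type %d" % wire)
    return dims, data

def channel_means(dims, data):
    """Collapse the per-pixel mean to one value per channel (CHW layout)."""
    c, h, w = dims.get(2, 1), dims.get(3, 1), dims.get(4, 1)
    plane = h * w
    return [sum(data[i * plane:(i + 1) * plane]) / plane for i in range(c)]

# Build a tiny synthetic mean blob (num=1, 3 channels, 2x2) to demonstrate.
example = b"\x08\x01\x10\x03\x18\x02\x20\x02"
floats = [104.0] * 4 + [117.0] * 4 + [123.0] * 4
packed = struct.pack("<12f", *floats)
example += b"\x2a" + bytes([len(packed)]) + packed

dims, data = parse_blobproto(example)
means = channel_means(dims, data)
```

The three values in `means` are what you would plug into the per-channel mean used by imageNet.cpp; on a real mean.binaryproto you would read the file bytes instead of the synthetic `example` buffer.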

SokPhanith commented 3 years ago

Thank you.