triton-inference-server / dali_backend

The Triton backend that allows running GPU-accelerated data pre-processing pipelines implemented in DALI's python API.
https://docs.nvidia.com/deeplearning/dali/user-guide/docs/index.html
MIT License

inception_ensemble example uses wrong preprocessor #114

Closed philnguyenresson closed 2 years ago

philnguyenresson commented 2 years ago

The inception_ensemble example uses channel-wise mean/std normalization, whereas I believe the source Inception model just rescales inputs to [-1, 1]; see here

JanuszL commented 2 years ago

Hi @philnguyenresson,

I think there are many flavors of data preprocessing for the Inception network; it depends very much on which model you have. For example, the PyTorch implementation uses channel-wise normalization. Also, this is just an example: you should adjust it to match the preprocessing applied during the training of your model.
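To make the difference between the two preprocessing flavors concrete, here is a minimal NumPy sketch. The channel-wise mean/std values are the commonly cited ImageNet statistics (an assumption here; verify them against your own training code), and the [-1, 1] rescale is the convention the TF-slim Inception checkpoints are usually described as using:

```python
import numpy as np

def normalize_channelwise(img_uint8, mean, std):
    """Channel-wise mean/std normalization (PyTorch-style)."""
    x = img_uint8.astype(np.float32) / 255.0
    return (x - np.asarray(mean, np.float32)) / np.asarray(std, np.float32)

def rescale_minus1_to_1(img_uint8):
    """Rescale pixel values from [0, 255] to [-1, 1] (TF-slim Inception style)."""
    return img_uint8.astype(np.float32) / 127.5 - 1.0

# Dummy HWC image standing in for a decoded JPEG
img = np.random.randint(0, 256, size=(299, 299, 3), dtype=np.uint8)

# Common ImageNet statistics (assumption; check your training pipeline)
a = normalize_channelwise(img, mean=[0.485, 0.456, 0.406],
                          std=[0.229, 0.224, 0.225])
b = rescale_minus1_to_1(img)
```

In a DALI pipeline both variants would typically be expressed through the `mean`/`std` arguments of the normalization operator, so switching flavors is a matter of changing those two parameters rather than restructuring the pipeline.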

philnguyenresson commented 2 years ago

Sure, I will make modifications as needed for my own code. I'm just talking about the model used in the example, "inception_v3_2016_08_28_frozen", which I believe was trained with [-1, 1] normalization.