sravanthOppo27 opened 1 year ago
Hi,
Thanks for the interest. The size reduction happens when you convert the model to its OpenVINO representation.
You can compare the file sizes yourself:
Okay, is your code a direct implementation of OpenVINO, or does it improve on OpenVINO (i.e., smaller model size and faster inference)?
Actually, the training in the script is done with PyTorch; after training finishes, the model is exported to the OpenVINO representation (.xml and .bin files), and the model size is reduced at that step.
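One common reason the exported .bin file is smaller than the PyTorch checkpoint is FP16 weight compression during conversion (OpenVINO's `ov.save_model` applies it by default via `compress_to_fp16=True`). Whether this particular script enables it is an assumption; the sketch below only illustrates the storage arithmetic with NumPy, using a made-up 1M-parameter tensor in place of real UNet weights:

```python
import numpy as np

# Hypothetical weight tensor: 1M float32 parameters, as stored
# in a PyTorch checkpoint (4 bytes per value).
weights_fp32 = np.random.rand(1_000_000).astype(np.float32)

# FP16 compression (what compress_to_fp16=True does to the .bin
# payload) stores 2 bytes per value, halving the weight storage.
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 4000000 bytes at FP32
print(weights_fp16.nbytes)  # 2000000 bytes at FP16
```

If the .xml/.bin pair is the same size as the checkpoint, the conversion was likely done at FP32, which speeds up inference via graph optimizations but does not shrink the weights.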
Dear author, I have tried your code, but there isn't any model size reduction for the UNet part. Is this only an inference speed-up mechanism, or is there a model size reduction step as well? Am I doing anything wrong?