-
Hello,
I appreciate your work; it works amazingly well. I'm facing an issue I'd like to ask about.
I can train my model on my GPU, really fast, without any problem (for my own configuration, it take…
-
Is there a tool that automatically generates the scheme in the yaml file? This also seems like a fairly important part of the paper.
-
## ❓ Question
I have been trying to use INT8 inference for a trained PyTorch model.
I followed this:
https://pytorch.org/TensorRT/_notebooks/vgg-qat.html
and
https://docs.nvidia.com/deepl…
-
1. Does the ANE support 8-bit inference and acceleration?
2. Does Core ML support freely setting quantized parameters? PTQ/QAT gives higher precision for my model.
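For context on what "freely setting quantized parameters" means numerically, here is a generic sketch (this is not the Core ML or ANE API; `quantize`, `dequantize`, `scale`, and `zero_point` are just the usual affine-quantization terms, chosen here for illustration). PTQ derives the scale and zero-point from observed min/max; "free" parameters would let you supply your own values instead:

```python
import numpy as np

def quantize(x, scale, zero_point):
    # Affine (asymmetric) quantization to int8.
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    # Map int8 codes back to approximate float values.
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([-1.0, -0.5, 0.0, 0.5, 2.0], dtype=np.float32)

# PTQ-style parameters derived from the observed min/max range.
scale = (x.max() - x.min()) / 255.0
zero_point = int(round(-128 - x.min() / scale))

x_hat = dequantize(quantize(x, scale, zero_point), scale, zero_point)
print(np.abs(x - x_hat).max())  # reconstruction error stays below one scale step
```

With "free" parameters you would replace the min/max-derived `scale`/`zero_point` with values obtained from PTQ calibration or QAT training, which is exactly why those methods can reach higher precision for a given model.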
-
I want to quantize resnet50.wts to fp32, fp16, and int8 in resnet50.py. How do I modify resnet50.py for each?
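Independent of how resnet50.py selects its build mode (I won't guess at its flags here), the three precisions differ numerically as in this pure-NumPy sketch. The symmetric max-abs int8 scheme below is an illustrative assumption, not necessarily the scheme the repo uses:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # stand-in for a weight tensor

# fp32: weights used as-is (the baseline).
w_fp32 = w

# fp16: cast down and back; roughly 3 decimal digits of precision survive.
w_fp16 = w.astype(np.float16).astype(np.float32)

# int8: symmetric per-tensor quantization with a max-abs scale.
scale = np.abs(w).max() / 127.0
w_int8 = (np.clip(np.round(w / scale), -127, 127) * scale).astype(np.float32)

for name, approx in [("fp16", w_fp16), ("int8", w_int8)]:
    print(name, np.abs(w - approx).max())
```

The fp16 path only needs a cast, while the int8 path needs a scale per tensor, which is why int8 builds additionally require calibration data (or QAT) to pick good scales for activations.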
-
I have a model composed of some tf.keras.layers.Conv1D layers plus custom Upsampling1DLayer and CustomCropping1D layers.
The model produces correct results after training, but when I convert it to tflite with …
-
## Bug Description
If a TensorRT calibrator implemented with the TRT Python API is passed to Torch-TRT compilation, compilation fails because the object is not recognized.
Error:
```
============…
-
### Description
I downloaded a yolov5 tflite model from TensorFlow Hub and used the Colab model compiler to convert it to EdgeTPU, and as you can see from the following information printed by the edgetpu compiler al…
-
To reproduce the results, I did the following:
1. The script I use is "application/imagenet_example/PTQ/ptq/ptq.py".
2. The pretrained model is changed to "torchvision.models.mobilenet_v2", whose top1 is 71.7…
-
When I use PTQ in NNCF, a problem occurred:
```
  File "D:\Software\anaconda3\envs\NNCF\lib\site-packages\onnx\__init__.py", line 40, in _save_bytes
    with open(cast(str, f), 'wb') as writable:
Permi…
```