-
### Describe the feature you would like to see added to OpenZFS
Switch to https://github.com/zlib-ng/zlib-ng codebase for zlib
### How will this feature improve OpenZFS?
The existing zlib code is v…
-
By using [pytorch-quantization](https://docs.nvidia.com/deeplearning/tensorrt/pytorch-quantization-toolkit/docs/index.html) I was able to create TensorRT engine models that are (almost) fully int8 and…
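The arithmetic underlying a "fully int8" engine can be sketched in plain Python. This is only a conceptual illustration of symmetric per-tensor int8 quantization, not pytorch-quantization code; the function names and the clamp-to-[-127, 127] convention are assumptions:

```python
def quantize_int8(values, amax):
    """Symmetric per-tensor quantization: map [-amax, amax] onto [-127, 127]."""
    scale = amax / 127.0
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize_int8(q_values, amax):
    """Recover approximate real values from int8 codes."""
    scale = amax / 127.0
    return [q * scale for q in q_values]

# Values outside [-amax, amax] saturate; everything else keeps ~amax/127 precision.
vals = [0.5, -1.0, 2.5, -3.2]
q = quantize_int8(vals, amax=3.2)
approx = dequantize_int8(q, amax=3.2)
```

The quality of such a model then hinges on how `amax` is chosen per tensor, which is exactly what calibration (or QAT) decides.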
-
**System information**
- TensorFlow version (you are using): nightly
- Are you willing to contribute it (Yes/No): Yes
**Motivation**
We would like to run QAT for transformer models.
**Des…
-
Hi, I'm having some issues converting a model when using "int8" as the target type. This is the error I get when running the model with TensorFlow after conversion:
```python
_main()
File ".../te…
-
I can export an ONNX model after quant_sim() in aimet_torch 1.27, but cannot export the model in aimet_torch 1.28.
When I export the model in 1.28:
```
quant_sim = QuantizationSimModel(
…
-
# ONNXRuntime TRT
```
docker build -f Dockerfile.manylinux2014_cuda11_4_tensorrt8_2 --network=host --build-arg POLICY=manylinux2014 --build-arg PLATFORM=x86_64 --build-arg DEVTOOLSET_ROOTPATH=/opt…
-
The current version only supports ISO2 codes. ISO3 support would be nice as well.
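A minimal sketch of what alpha-2 to alpha-3 support could look like. The function name and the tiny mapping table are hypothetical (a real implementation would cover the full ISO 3166-1 table), but the listed code pairs themselves are standard:

```python
# Hypothetical excerpt of the ISO 3166-1 alpha-2 -> alpha-3 mapping.
ISO2_TO_ISO3 = {
    "US": "USA",
    "DE": "DEU",
    "FR": "FRA",
    "GB": "GBR",
    "JP": "JPN",
}

def to_iso3(iso2_code):
    """Convert an ISO 3166-1 alpha-2 code to its alpha-3 equivalent."""
    try:
        return ISO2_TO_ISO3[iso2_code.upper()]
    except KeyError:
        raise ValueError(f"Unknown ISO2 code: {iso2_code!r}")
```

Accepting both cases on input (via `.upper()`) keeps the lookup forgiving without changing the canonical uppercase output.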
-
I am on Ubuntu 21.04 with the GCC 11.1-v3 toolchain (gcc version 11.1.0 (Ubuntu 11.1.0-1ubuntu1~21.04+v3)).
During the compilation stage I get the following issues:
LTO …
-
## Issue
When running a freestyle build, this plugin provides a field titled "Comment for triggering a build" that consumes the notes in a merge request and, if they match the provided regex, starts …
-
## Description
The default calibrator is MaxCalibrator, so this code will never be used in QAT?
https://github.com/NVIDIA/TensorRT/blob/4575799a91f67c060cd34212a9a27d0264460071/tools/pytorch-quantization/example…
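For context, a max calibrator conceptually just tracks the running absolute maximum of the tensors it observes and derives the quantization scale from it. The following is a plain-Python sketch of that idea, not the pytorch-quantization toolkit's actual `MaxCalibrator` class:

```python
class MaxCalibrator:
    """Conceptual sketch of a 'max' calibrator: amax = max |x| over all batches.

    This mirrors the idea only; it is not the pytorch-quantization implementation.
    """

    def __init__(self):
        self.amax = 0.0

    def collect(self, values):
        # Update the running absolute maximum over one batch of values.
        batch_max = max(abs(v) for v in values)
        self.amax = max(self.amax, batch_max)

    def compute_scale(self, num_bits=8):
        # Symmetric scale mapping [-amax, amax] onto the signed integer range.
        qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
        return self.amax / qmax
```

Because the max rule needs no histogram, it is cheap but sensitive to outliers, which is why histogram-based calibrators exist as an alternative.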