-
https://github.com/VeriSilicon/tvm/blob/vsi_npu/tests/python/contrib/test_vsi_npu/test_vsi_pytorch_model_all.py
I built and cross-compiled for [Khadas VIM3 pro](https://www.khadas.com/vim3) through…
-
Hi Team Brevitas,
I am trying a simple toy model to check what the exported ONNX model with QOps looks like. As per the [ONNX_export_tutorial.ipynb](https://github.com/Xilinx/brevitas/blob/master/noteb…
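For context on what those QOps compute: the quantize/dequantize pair in an exported graph follows the ONNX `QuantizeLinear`/`DequantizeLinear` arithmetic. A minimal pure-Python sketch of that round-trip (the scale and zero-point values here are made up for illustration, not taken from the tutorial):

```python
def quantize_linear(x, scale, zero_point, qmin=-128, qmax=127):
    # ONNX QuantizeLinear: q = saturate(round(x / scale) + zero_point),
    # with round-half-to-even, which Python's round() also uses.
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize_linear(q, scale, zero_point):
    # ONNX DequantizeLinear: x ≈ (q - zero_point) * scale
    return (q - zero_point) * scale

scale, zp = 0.05, 0          # illustrative int8 quantization parameters
q = quantize_linear(1.237, scale, zp)    # snaps to the int8 grid
x = dequantize_linear(q, scale, zp)      # float value on that grid
```

Out-of-range inputs saturate to the int8 bounds, so `quantize_linear(1000.0, 0.05, 0)` returns 127 rather than overflowing.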
-
After calling quantsim.export(path, filename_prefix), I could not get an int8 QNN ONNX model. My objective is to get an int8 ONNX model through the AIMET quant toolkit, which looks like the attached image be…
-
Hello, this is my first time using TVM. An error appears when running relay.build. Can you suggest a solution?
The error is as follows
tvm.error.InternalError: Traceback (most recent call la…
-
### Describe the feature request
Can I use Microsoft.AI.MachineLearning (WinML) interfaces to enable the OnnxRuntime-QNN-EP (Qualcomm NPU) or other non-DirectX EPs?
1. From "windows-ml/get-started", we…
-
## Versions
- PYNQ Z1: v3.0.1
- FINN: v0.9
- Xilinx tools: 2022.2
- Ubuntu: 20.04
## Commit hash
commit e76f20d1d8d05f2d8ddb52ade0f991915672622b (HEAD -> dev, origin/dev)
Merge: a3b6a7fb 3…
-
I would like to train the network with my own quantizer; is it possible to deploy such a model onto the board?
The weights of the network are quantized in low precision but stored in fp32. All the mo…
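"Quantized in low precision but stored in fp32" is usually fake quantization: the weight values are restricted to a low-precision grid, but the tensor dtype stays float. A minimal sketch of a symmetric signed fake quantizer (the function name, scale, and 4-bit width are illustrative assumptions, not from the post):

```python
def fake_quantize(w, scale, bits=4):
    # Snap w onto a signed `bits`-bit grid, then store it back as float:
    # only 2**bits distinct levels remain, but the dtype is still float.
    qmax = 2 ** (bits - 1) - 1   # e.g.  7 for 4-bit signed
    qmin = -qmax - 1             # e.g. -8
    q = max(qmin, min(qmax, round(w / scale)))   # integer level, clipped
    return q * scale             # float-valued, on the low-precision grid

w_fq = fake_quantize(0.33, scale=0.1)   # snaps to the nearest 0.1 step
```

Deployment toolchains typically need the integer levels (`q`) and the scale separately, which is why a model trained this way often has to be re-exported rather than used with its fp32-stored weights directly.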
-
For example:
When the input tensor has shape [1, 256, 56, 56], the weight tensor has shape [256, 8, 3, 3], the weight scale has shape [256], and group is 32, then this can fail because of this check: […
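For what it's worth, the reported shapes are consistent under the usual grouped-convolution rule (weight dim 1 equals in_channels // groups, and a per-output-channel scale has out_channels entries). An illustrative reimplementation of that rule in plain Python, not the actual check being quoted:

```python
def grouped_conv_shapes_ok(data_shape, weight_shape, scale_shape, groups):
    # Illustrative grouped-conv (NCHW / OIHW) shape consistency check.
    n, in_c, h, w = data_shape
    out_c, w_in_c, kh, kw = weight_shape
    return (
        in_c % groups == 0                # channels divide evenly into groups
        and out_c % groups == 0
        and w_in_c == in_c // groups      # here: 256 // 32 == 8
        and scale_shape == (out_c,)       # per-output-channel weight scale
    )

# the shapes from the report: these satisfy the usual rule
ok = grouped_conv_shapes_ok((1, 256, 56, 56), (256, 8, 3, 3), (256,), 32)
```

Since 256 // 32 == 8 matches the weight's second dimension, a check that rejects this combination is presumably stricter than the standard rule.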
-
### Describe the issue
When trying to quantize a YOLOv8 model (exported with `yolo export model=yolov8x.pt format=onnx`) with `onnxruntime`, I get the following error:
```
$ python quantize.py yo…