-
Hi @stevelinsell @paulturx
Performance does not seem to improve when offloading PRF (pseudorandom function) operations to QAT, whereas performance improves roughly 3x when offloading asymmetric ke…
-
Setup:
- Ubuntu 24.04 amd64
- in-tree QAT
- Dual Intel(R) Xeon(R) Gold 6438Y+ (16 VFs each, 32 total)
When I set `worker_processes` in nginx.conf above 32, qat.service reports `qatmg…
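As a point of reference, a minimal nginx.conf sketch that keeps the worker count at or below the number of available QAT VFs; the values here are illustrative for the 32-VF setup described above, not a recommended configuration:

```nginx
# 32 QAT VFs available in total (16 per socket),
# so cap workers at 32 so each worker can claim a VF.
worker_processes 32;
```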
-
### 🐛 Describe the bug
I use pytorch-quantization to do QAT on a PointPillars model. It works fine during PyTorch training; however, when I export the torch model to ONNX, accuracy degrades badly. …
-
### What happened?
xquic calls into BabaSSL's TLS 1.3 implementation. BabaSSL supports async_job (asynchronous SSL), so after offloading time-consuming operations such as encryption and decryption to QAT, the job can return immediately and the CPU can handle other requests, which can greatly improve performance. In actual testing, however, xquic does not support BabaSSL's async jobs. For example, xquic calls SSL_do_handshake to start the handshake, and before starting the handshake you can set the SSL mo…
-
### Background
As reported in [#6100](https://github.com/US-EPA-CAMD/easey-ui/issues/6100), a database exception occurred when attempting to store duplicate Summary Value rows in an imported emission …
-
Hello, I'm trying to train YOLOv8-large in int4 format. I took the training recipe available at [sparsezoo](https://sparsezoo.neuralmagic.com/models/yolov8-l-coco-pruned85_quantized?hardware=deepspars…
-
## Question
Environment:
CentOS 6.6, kernel: 2.6.32
QAT driver: qat1.7.l.4.7.0-00006
qat_engine: https://github.com/intel/QAT_Engine.git
When compiling the driver, the following error is reported. It appears that `pci_ignore_hotplug` cannot be found — has anyone run into a similar issue?
make all-am
make[1…
-
aimet version: 1.28
SNPE version: 2.14
deployment platform: SM8550 DSP, W8A8 with 32-bit bias
I have a model whose backbone is MobileNetV3. As you know, MobileNetV3 primarily consists of poi…
-
I would like to know whether, when using MongoDB with the zstd compression algorithm, this plugin can be used to invoke the QAT hardware.
-
I use modelopt to run QAT on my model:
```
import modelopt.torch.quantization as mtq
# Select quantization config
config = mtq.INT8_DEFAULT_CFG
# Define forward loop for calibration
def forward_loop(model):
…