-
Hi,
I would like to ask a question about running coloc.abf with coloc version 5.2.3. My input data is as below.
```
type position snp gARE pvalues beta n
quant 1592964 chr1_1592964_C_T_b38 chr1:169227…
```
-
```
Some parameters are on the meta device device because they were offloaded to the cpu.
Quantizing weights: 0%| | 0/1771 [00:00
```
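For context, the "offloaded to the cpu" warning comes from accelerate when a model loaded with `device_map="auto"` does not fit entirely on the GPU, so some weights stay on the CPU (or on the meta device) while the subsequent "Quantizing weights" pass runs. Below is a minimal sketch of a load that can produce this situation, assuming a Hugging Face transformers setup; the model name and memory limits are placeholders, not values from the original report.

```python
# Sketch: load a large model with device_map="auto"; layers that do not fit on
# the GPU are offloaded to the CPU, which triggers the warning quoted above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-large-model"  # placeholder checkpoint

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                         # accelerate decides GPU/CPU placement
    max_memory={0: "20GiB", "cpu": "64GiB"},   # placeholder limits that force offloading
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```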
-
When will a quantized version of the 52B model be available? When I quantize with the official https://github.com/Tele-AI/Telechat/tree/master/quant, it reports an error. I am using an A10 GPU.
```
Traceback (most recent call last):
File "/*****/quant/quant.py", line 27, in
model.quantize(example…
```
-
We intend to lower DL models to MLIR TOSA, and **we found MLIR does not have full support for half-precision ops**; for example, AvgPool2dOp in TOSA only accepts fp32, int8, and int64. We've tr…
-
Following the README.md in paddleslim/example/auto_compression, running the automated compression fails with:
```
Traceback (most recent call last):
File "/aidata/CYHan/auto_compass.py", line 42, in
ac.compress()
File "/root/ana…
```
-
In [README.md](https://github.com/olxgroup-oss/libvips-rust-bindings/blob/v1.7.0/README.md), libvips is noted as 8.14.5; however, the test example, when executed, failed with the following error.
…
-
It seems like the AutoGPTQ quantization module is not able to access the CUDA extension.
The previous ingest problem was solved by
pip install git+https://github.com/Keith-Hon/bitsandbytes-windows.g…
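A quick way to check whether the problem is a CPU-only or mismatched PyTorch build rather than AutoGPTQ itself is a small diagnostic like the sketch below; it is a generic check, not taken from the original report.

```python
# Diagnostic sketch: verify that PyTorch sees CUDA and that auto_gptq imports cleanly.
import torch

print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)      # None on CPU-only builds
print("CUDA available:", torch.cuda.is_available())

try:
    import auto_gptq
    print("auto_gptq:", auto_gptq.__version__)
except ImportError as exc:
    print("auto_gptq import failed:", exc)
```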
-
First of all, thanks for developing this excellent library!
My strategy enters a short/long position right after closing the long/short position whenever a short/long signal occurs. And each position wil…
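To make the intended behaviour concrete, here is a small, library-agnostic sketch of the reverse-on-signal logic described above; the signal encoding and position bookkeeping are illustrative assumptions, not code from the library in question.

```python
# Library-agnostic sketch: close the current position and immediately open
# the opposite one when the signal flips. +1 = long signal, -1 = short signal.
def rebalance(position: int, signal: int) -> int:
    """Return the new position given the current position and the latest signal."""
    if signal == 0 or signal == position:
        return position          # no signal, or already on the right side
    # Closing the existing long/short and entering the opposite side happens
    # in the same step, matching the behaviour described above.
    return signal

positions = []
position = 0
for signal in [1, 1, -1, -1, 1, 0, -1]:   # example signal stream
    position = rebalance(position, signal)
    positions.append(position)

print(positions)   # [1, 1, -1, -1, 1, 1, -1]
```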
-
I want to use a 4-bit quantized Mistral model from Hugging Face with Semantic Kernel so that I can run it on the Google Colab free tier. But I am not able to find a way to pass this configuration while creatin…
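On the transformers side, loading a Mistral checkpoint in 4-bit usually looks like the sketch below; whether and how such a pre-loaded model can be handed to Semantic Kernel's Hugging Face connector depends on the connector version, so the wiring is left out as the open question here.

```python
# Sketch: load a Mistral model in 4-bit with bitsandbytes so it fits on a
# Colab free-tier GPU. The Semantic Kernel side is intentionally omitted,
# since passing a quantization config through its connector is the open question.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```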
-
https://github.com/PaddlePaddle/PaddleSlim/tree/develop/example/auto_compression/ocr
Following this recipe with the ICDAR2015 dataset and a pretrained ResNet50 model (only the model configuration needs changing), the scheme runs successfully: accuracy is basically unchanged, the speed drops to 1/4, and an Inference model is obtained. Converting this model to ONNX then fails with an error about a missing quantization configuration file (cali…