-
GPU hardware: 4x 2080Ti, 12 GB VRAM per card; the run is pinned to a single specified card.
Running the ChatGLM-6B fine-tune:
`CUDA_VISIBLE_DEVICES=0 python train_qlora.py \
--train_args_json chatGLM_6B_QLoRA.json \
--model_name_or_path /data/chatglm-6b \
--train_data_path d…
-
Here is a GIF of the 4 pixels rendered by the code below:
![test](https://cloud.githubusercontent.com/assets/489459/9022787/fe1a309a-387a-11e5-9d56-85533af1bf7e.gif)
And what it should look like.
![test](https://cloud.g…
-
Where can I find the full list of operators currently supported by RKNN?
I am currently trying to port efficientVit-sam (an encoder-decoder architecture) to the RKNN platform. The officially trained torch model can be exported to an ONNX model; I now want to convert that ONNX model to an RKNN model, which raises questions such as whether the operators are supported. Below is the code for converting the encoder:
```
from __future__ import absolute_import, print…
-
I merged a Mistral 8x7B model with the LoRA adapter, and I saved the .pt with `torch.save(model.state_dict(), 'path_to_model.pt')`.
However, when I use vllm to inference on the new merged model, I fai…
-
Hi, thanks for your great work!
But I have run into some problems with it.
This is my device:
GPU: GTX 1080Ti
CUDA: 10.2
cuDNN: 8.2
TensorRT: 7.1.3.4
![image](https://user-images.githubusercontent.co…
-
Hey @Brikwerk, thanks for posting this -- it's such an awesome idea. This technically isn't a problem with your code, but rather something I'm probably experiencing with PyGame. Nonetheless I thought …
-
**Describe the bug**
I am using the 4-bit post-init quantization approach. I was hoping it would make inference faster in addition to saving memory, but that is not the case.
**To Reproduce**
Qua…
-
### Your current environment
```text
Problem that appears when running python examples/minicpmv_example.py directly after installation:
INFO 06-27 10:16:32 utils.py:598] Found nccl from environment variable VLLM_NCCL_SO_PATH=/usr/local/lib/pytho…
-
```
I'm using metadata-extractor (java app in attachments), but it doesn't get ISO
Speed from Exif data for my photos taken with Nikon D90.
Expected output:
http://www.flickr.com/photos/mariooshinne…