-
## Question
I use the yolov8 tflite model in Flutter; the Python code w…
-
**Edit** Originally, this issue was about a proof-of-concept for a new PyTorch backend in RETURNN.
This has somehow evolved into a whole new generic frontend API (original idea here: https://github.c…
-
### System Info
Python 3.11.5
torch 2.3.0
transformers 4.41.1
accelerate 0.30.1
```
+------------------------------------…
```
-
Hello, while fine-tuning the 52B model I hit an error, specifically when converting the LoRA layer parameters during model saving. The code hangs at TeleChat-52B/deepspeed-finetune/utils/module/lora.py -> convert_lora_to_linear_layer -> with deepspeed.zero.GatheredParameters(), using ZeRO-3 + LoRA. The error message is as follows:
epoc…
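For context, the merge that convert_lora_to_linear_layer performs (once the ZeRO-3 partitioned weights have been gathered) amounts to folding the low-rank update back into the base weight. A minimal NumPy sketch of that merge, with the `deepspeed.zero.GatheredParameters()` context (the step the report hangs in) deliberately omitted and all names/shapes being illustrative assumptions:

```python
import numpy as np

def merge_lora(base_weight, lora_a, lora_b, alpha, rank):
    """Return base_weight + (lora_b @ lora_a) * (alpha / rank)."""
    scaling = alpha / rank
    return base_weight + (lora_b @ lora_a) * scaling

# Toy shapes: out_features=4, in_features=6, rank=2
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 6))
a = rng.standard_normal((2, 6))   # lora_A: rank x in_features
b = np.zeros((4, 2))              # lora_B is zero-initialised in LoRA
merged = merge_lora(w, a, b, alpha=16, rank=2)
print(np.allclose(merged, w))  # True: a zero-initialised B is a no-op
```

Under ZeRO-3 the hang usually indicates that not every rank entered the GatheredParameters collective, so checking that all ranks call the conversion code may help.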
-
Curated Weibo posts
-
### System Info
```Shell
- `Accelerate` version: 0.30.0.dev0
- Platform: Linux-5.15.0-87-generic-x86_64-with-glibc2.35
- `accelerate` bash location: /home/work/.local/anaconda3/envs/multinode-test/…
```
-
I am sorry that I have to open this, but neither the OpenCL GitHub branch nor the Google forums have any kind of (updated) step-by-step instructions for installing Caffe OpenCL on Intel …
-
Hello everyone, I'm encountering a memory issue while fine-tuning a 7B model (such as Mistral) using a repository I found. Despite having 6 H100 GPUs at my disposal, I run into out-of-memory errors wh…
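A back-of-envelope check makes the OOM plausible: full fine-tuning with Adam in mixed precision is commonly estimated at ~16 bytes per parameter (fp16 weights and gradients plus fp32 master weights and two Adam moments), which for 7B parameters already exceeds a single 80 GB H100 unless the optimizer states are sharded. A sketch of that arithmetic (the per-parameter figure is the commonly cited estimate, not a measurement from this report):

```python
def finetune_memory_gb(n_params_billion, bytes_per_param=16):
    # ~16 bytes/param: fp16 weights (2) + fp16 grads (2) +
    # fp32 master weights (4) + Adam m (4) + Adam v (4)
    return n_params_billion * 1e9 * bytes_per_param / 1e9

total = finetune_memory_gb(7)   # model/optimizer states alone, no activations
per_gpu_zero3 = total / 6       # ideal ZeRO-3 split across 6 GPUs
print(round(total), round(per_gpu_zero3, 1))  # 112 18.7
```

This is why ZeRO-3 or LoRA-style methods are the usual fix: sharding drops the per-GPU state to ~19 GB in the ideal case, leaving room for activations.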
-
Environment: NVIDIA A10 (24 GB VRAM); Docker image: nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04; CPU: Intel® Xeon® Silver 4314 × 2; RAM: 256 GB.
Log below:
NCCL_DEBUG=info NCCL_IB_DISABLE=0 NCCL_NET_GDR_LEVEL=2 deepspeed --master_port 16…
-
### What is the issue?
Generating a response after first starting Ollama works flawlessly from what I can tell. I am able to change models and generate responses from prompts. After the model unloa…