-
Hello everyone, I have 4 GPUs (RTX 3080, 10 GiB each) and I'm trying to fine-tune Mistral 7B v2.0 locally. I tried to optimize as much as I can...(Accelerate with DeepSpeed, 4-bit quantization, LoRA an…
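For context, a quick back-of-the-envelope on why 4-bit quantization matters on 10 GiB cards. This is a minimal sketch assuming ~7.24e9 parameters (the Mistral-7B order of magnitude; the exact count is an assumption), counting weight storage only, not activations or optimizer state:

```python
# Rough VRAM needed just to hold the weights of a ~7B-parameter model.
params = 7.24e9  # assumed parameter count, Mistral-7B order of magnitude

def gib(n_bytes: float) -> float:
    """Convert a byte count to GiB."""
    return n_bytes / 2**30

fp16 = gib(params * 2.0)   # 2 bytes per weight in fp16/bf16
int4 = gib(params * 0.5)   # 0.5 bytes per weight in 4-bit

print(f"fp16 weights: {fp16:.1f} GiB")   # ~13.5 GiB -- already over one 10 GiB card
print(f"4-bit weights: {int4:.1f} GiB")  # ~3.4 GiB -- fits, leaving headroom for LoRA state
```

This is why full-precision fine-tuning cannot fit a single 10 GiB GPU even before gradients and optimizer state are counted, while 4-bit weights plus small LoRA adapters can.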
-
Hi all,
I downloaded the op files to a local folder (named sdemodels_op_diy), and got an error when I import sdemodels_op_diy. I found the crucial reason is that the upfirn2d.py file used t…
-
### What happened?
No output appears after "Vulkan0: PowerVR B-Series BXE-2-32 (PowerVR B-Series Vulkan Driver) | uma: 1 | fp16: 1 | warp size: 1".
It is on a RISC-V board with an Imagination iGPU.
…
-
### 🐛 Describe the bug
[isin()](https://pytorch.org/docs/stable/generated/torch.isin.html):
- Passing a scalar (e.g. `3`, `4`, `5`) as the `elements` or `test_elements` argument doesn't work.
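For reference, the documented semantics can be emulated in pure Python; this is an illustrative sketch of what scalar arguments are expected to do (the `isin` helper below is hypothetical, not PyTorch code):

```python
def isin(elements, test_elements, invert=False):
    # Pure-Python sketch of torch.isin semantics for illustration:
    # either argument may be a scalar or a list, per the documentation.
    elems = elements if isinstance(elements, (list, tuple)) else [elements]
    tests = set(test_elements) if isinstance(test_elements, (list, tuple)) else {test_elements}
    result = [(e in tests) != invert for e in elems]
    # A scalar `elements` argument yields a scalar result.
    return result if isinstance(elements, (list, tuple)) else result[0]

print(isin([1, 2, 3], 3))  # [False, False, True]
print(isin(3, [1, 2, 3]))  # True
```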
-
Hello,
I ran into a conflict when I tried to install cell2cell. I'm not able to import it because during the import the module `importlib.metadata` is not found. I tried to install this module on th…
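One common cause of this, assuming the environment runs Python older than 3.8: `importlib.metadata` only exists in the standard library from Python 3.8 on, with the `importlib_metadata` PyPI package as the backport. A minimal sketch of the usual guarded import:

```python
try:
    # Standard library on Python >= 3.8
    from importlib import metadata
except ImportError:
    # Backport for older interpreters: pip install importlib-metadata
    import importlib_metadata as metadata

# List a few installed distribution names to confirm the import works.
names = sorted({d.metadata["Name"] for d in metadata.distributions() if d.metadata["Name"]})
print(names[:3])
```

If the import still fails on Python >= 3.8, the interpreter being used at import time is likely not the one the package was installed into.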
-
### Please describe your question
Running `cls = Taskflow("zero_shot_text_classification", schema=schema)` and then `cls('很好')` raises the error below; other input texts fail the same way:
```sh
[2024-05-31 17:11:56,645] [ ERROR] app.py:828 - Exception on /getText [POST]
Tra…
-
### Describe the issue
I am trying to run "meta-llama/llama-2-7b-chat-hf" with the llm-on-ray framework; however, I am getting the following output:
```
(ServeReplica:router:PredictorDeploymen…
-
v0.6.1
```bash
python quantize.py --model_dir ./hg_weight_3999/ --dtype float16 --qformat int4_awq --export_path ./quantized_int4-awq --calib_size 32
```
```log
Using pad_token, but it is not se…
-
Hello,
great work you've done here! I wanted to try it out myself locally, but I'm having problems running this model using cog. More specifically, I'm getting `RuntimeError: The size of tensor a (70…
-
I can't get this submodule to build. Regular Gaussian splatting builds with no problem, but here I'm running into C++ compile errors. What could be the problem?