-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue]…
-
(I tried to create a pull request against the development branch but it failed
since I'm not a collaborator.)
For FIM auto completion requests to the LM Studio provider, the model name must be pro…
-
### What is the issue?
Ollama on Ubuntu 22.04 detects my CUDA GPU and loads the model into its memory, but the processing seems to run mostly on the CPU. Is this normal behavior? The overall perf…
-
Following the README, should I first download consolidated.00.pth and then use convert_llama_weights_to_hf.py to convert it to .bin weights?
But it raises:
RuntimeError: shape '[32, 2, 2, 4096]' is invalid for input of size 16777216
[meta-llama/CodeLlama-7b-hf at…
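For context, this RuntimeError is a pure element-count mismatch: a view/reshape to `[32, 2, 2, 4096]` needs 32 × 2 × 2 × 4096 = 524288 elements, but the checkpoint tensor holds 16777216 (= 4096 × 4096), which usually means the conversion script's assumed head/hidden dimensions don't match the downloaded weights. A minimal sketch of the arithmetic (using NumPy in place of PyTorch; the shapes are taken from the error message, not from the script itself):

```python
import numpy as np

# Shape the conversion code tried to view the tensor as
target_shape = (32, 2, 2, 4096)
n_target = int(np.prod(target_shape))  # 32 * 2 * 2 * 4096 = 524288

# Element count of the actual checkpoint tensor (a 4096 x 4096 matrix)
n_actual = 4096 * 4096  # 16777216

# The counts disagree, so any reshape to target_shape must fail
assert n_target != n_actual

try:
    np.zeros(n_actual).reshape(target_shape)
except ValueError as e:
    # NumPy raises ValueError here; PyTorch raises the RuntimeError
    # quoted above for the same size mismatch.
    print("reshape failed:", e)
```

If the counts lined up (e.g. a target of `(32, 128, 4096)`, which is also 16777216 elements), the reshape would succeed, so the fix is typically passing the correct `--model_size` (or matching config) to the conversion script rather than editing the tensor.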
-
### Bug description
I'm using a local model (Ollama with CodeLlama) to generate commit messages. It sometimes works and sometimes fails with the warning: "⚠ No commit messages were generated."…
-
### System Info
```
Target: x86_64-unknown-linux-gnu
Cargo version: 1.75.0
Commit sha: 4ee0a0c4010b6e000f176977648aa1749339e8cb
Docker label: sha-4ee0a0c
nvidia-smi:
Tue Apr 2 17:34:07 2024 …
-
I encountered an issue when using mpirun. Let me describe how I used it.
First, I ran the original command from the example, and it worked successfully.
`mpirun -n 2 --allow-run-as-root \
python run.p…
-
Hello, just discovered your paper, this is great work!
Would you like to try your best model on the perf-ninja puzzles?
https://github.com/dendibakh/perf-ninja
I'm the main author and I'm very cur…
-
I am unable to make any use of llm-vscode; I get nothing but error messages. First I get `Inference api error: Service Unavailable`, and then `http error: builder error: relative URL without a ba…
-
### System Info
python 3.10.10, transformers 4.44.2, peft 0.12.0
### Who can help?
@BenjaminBossan @sayak
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tas…