-
### Your current environment
The output of `python collect_env.py`
```text
Your output of `python collect_env.py` here
```
### 🐛 Describe the bug
Launched server with:
```bash
vllm…
```
-
Hi,
https://github.com/PromptEngineer48/Ollama/blob/main/2-ollama-privateGPT-chat-with-docs/privateGPT.py uses a couple of environment variables, such as `MODEL`, but nothing sets those variables.
So when …
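One way to avoid failures from unset variables is to read them with an explicit fallback. A minimal sketch, assuming the script looks variables up via `os.environ`; the default value `"mistral"` is purely illustrative, not the repository's actual default:

```python
import os

# Hedged sketch: fall back to a default when MODEL is not set in the
# environment, instead of letting os.environ["MODEL"] raise a KeyError.
model = os.environ.get("MODEL", "mistral")  # assumed default, adjust as needed
print(f"Using model: {model}")
```

Alternatively, export the variable in the shell before launching the script.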
-
When I trigger manual completion (C-x), it just spams "Completion started" until I have to close nvim.
My config:
```lua
{
'maxwell-bland/cmp-ai',
config = function()
local cmp_ai = require('cmp_a…
```
-
Is it possible to merge multimodal LLMs?
For example, could Llava and CodeLlama be merged? It might be beneficial for some software engineering tasks.
-
Hello,
A tensor assertion error is raised when you try to train the model. It starts with the following:
```bash
0%| | 0/10 [00:00
```
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
Hi,
Apologies if the solution is obvious but I'm new to this. When running the example infilling script:
`torchrun --nproc_per_node 1 example_infilling.py --ckpt_dir CodeLlama-7b/ --tokenizer_pat…
dv347 updated 3 months ago
-
### Anything you want to discuss about vllm.
I run into the below error when using meta-llama/CodeLlama-7b-Instruct-hf with `vllm==0.4.0, torch==2.1.2`; the code works perfectly with `vllm==0.2.1`, b…
-
CodeBLEU shows that fine-tuned CodeLlama lags behind GPT-4.
However, CodeLlama outperforms GPT-4 on the compilation and execution test metrics.
These results do not look consistent.
My own tests
…
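Such a divergence is not necessarily inconsistent: surface-overlap metrics like CodeBLEU and execution-based metrics measure different things. A toy illustration (this is a simple unigram-overlap stand-in, NOT the real CodeBLEU) of how two functionally identical solutions can share few tokens:

```python
# Two functionally identical implementations with different surface forms.
ref = "def add(a, b): return a + b"
hyp = "def add(x, y):\n    total = x + y\n    return total"

# Surface similarity: fraction of reference tokens also present in the
# hypothesis (a crude stand-in for n-gram overlap metrics).
ref_toks, hyp_toks = set(ref.split()), set(hyp.split())
overlap = len(ref_toks & hyp_toks) / len(ref_toks)

# Execution-based comparison: do both functions behave the same?
ns_ref, ns_hyp = {}, {}
exec(ref, ns_ref)
exec(hyp, ns_hyp)
behave_same = all(ns_ref["add"](a, b) == ns_hyp["add"](a, b)
                  for a, b in [(1, 2), (-3, 3), (0, 0)])

print(overlap, behave_same)  # low token overlap, yet identical behavior
```

A model tuned toward a reference style can score high on overlap while failing to compile, and vice versa, so the two rankings can legitimately disagree.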
-
### Is your feature request related to a problem? Please describe.
Hello!
The issue is related to the use of `Together AI` models, such as `CodeLlama-34b` and `Llama-3-70b-chat-hf`.
Despite that …