-
# :grey_question: About
[`codellama` has just been released with it 70B version](https://twitter.com/ollama/status/1752034262101205450)
![image](https://github.com/ollama/ollama/assets/5235127/b…
-
### The model to consider.
Hi. Could you add support to [mistralai/Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1)?
Thanks!
### The closest model vllm already supports.
…
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue]…
-
### Describe the bug
I have launched a BentoML server with a vLLM backend on Kubernetes.
Once the model is loaded (CodeLlama 13B Instruct in float16), the pod logs are the following:
[INFO] …
-
The CodeQwen 1.5 model supports fill-in-the-middle (https://github.com/QwenLM/CodeQwen1.5?tab=readme-ov-file#2-file-level-code-completion-fill-in-the-middle), so I was hoping to use the `/infill…
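As a rough illustration of the file-level fill-in-the-middle format that the linked README describes, the request boils down to wrapping the code before and after the cursor in special tokens. The token strings below are taken from the CodeQwen1.5 README and are assumptions here; they should be verified against the model's tokenizer config before use:

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Sketch of a CodeQwen1.5-style fill-in-the-middle prompt.

    The model is expected to generate the text that belongs between
    `prefix` and `suffix` after the <fim_middle> marker.
    """
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# Example: ask the model to fill in the body of a function.
prompt = build_fim_prompt(
    "def add(a, b):\n    ",
    "\n    return result\n",
)
```

The completion returned by the model would then be inserted verbatim between the prefix and suffix in the editor buffer.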
-
- [ ] [README.md · PipableAI/pip-sql-1.3b at main](https://huggingface.co/PipableAI/pip-sql-1.3b/blob/main/README.md?code=true)
# README.md · PipableAI/pip-sql-1.3b at main
**DESCRIPTION:**
- licen…
-
Type: **Bug**
I am running codellama and llama3 locally.
I am chatting in Cody AI with llama3, and at a certain point I cannot chat anymore - see screenshots below; there is an explanation on one o…
-
First, install the generator (npm required):
```bash
npm install @openapitools/openapi-generator-cli -g
```
Then execute:
```bash
openapi-generator-cli generate -i api.yml -g rust -o…
-
### What is your question?
After reading the documentation, I am still not clear on how to get ollama working.
I've tried running this: ` fabric --pattern explain_code --model codellama:latest
Error:…
-
Hello,
I want to fine-tune the **Codellama 13B** model using **llama.cpp**, and fine-tuning works on my 2 GPUs (both RTX 4090), but it is very slow.
How can I do fast finetunein…