-
The CodeLlama 70B model is different from the 7B and 13B models; can CodeLlama 70B be supported?
-
Hello!
I actually have two models, CodeLLaMa-13b-Python and CodeLLaMa-13b, that need to be merged. The overall goal is to merge two models (one trained on Python and another trained on any other lan…
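For reference, below is a minimal sketch of one way to merge two such checkpoints: a simple linear interpolation of their parameters. It assumes both models are available in Hugging Face format with identical architectures; the hub IDs and the 0.5 mixing weight are illustrative, and dedicated tools exist for more sophisticated merge strategies.

```python
# Minimal sketch: linear interpolation of two CodeLlama-13b checkpoints.
# Assumes both models share the same architecture and tokenizer; loads
# both checkpoints fully into memory, so substantial RAM is required.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-13b-hf"           # general-purpose checkpoint
python_id = "codellama/CodeLlama-13b-Python-hf"  # Python-specialised checkpoint
alpha = 0.5                                      # weight given to the Python model

base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
python_model = AutoModelForCausalLM.from_pretrained(python_id, torch_dtype=torch.float16)

merged_state = base.state_dict()
python_state = python_model.state_dict()
for name, param in merged_state.items():
    # Interpolate every tensor that exists in both checkpoints with the same shape.
    if name in python_state and python_state[name].shape == param.shape:
        merged_state[name] = (1 - alpha) * param + alpha * python_state[name]

base.load_state_dict(merged_state)
base.save_pretrained("CodeLlama-13b-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("CodeLlama-13b-merged")
```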
-
Hi, following the tutorial, I executed the command below. There is no error, but it freezes here. What should I do next? If you can improve this tutorial, thank you.
`python run.py --config Implementat…
-
Consider providing official CodeLlama inference speed-up support.
-
I tried to fine-tune CodeLlama 7b Instruct by downloading the weights through the official repository.
Folder structure of the folder containing the CodeLlama Instruct weights:
![image](https://github.com/pytorch/…
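As a starting point, here is a minimal sketch of loading CodeLlama 7b Instruct in the Hugging Face layout before fine-tuning. It assumes the weights are either the public converted hub checkpoint or a local directory in the same format; weights downloaded from Meta's official repository (consolidated .pth files plus params.json) would first need to be converted, e.g. with transformers' convert_llama_weights_to_hf.py script.

```python
# Minimal sketch: load a Hugging Face-format CodeLlama 7b Instruct checkpoint
# and run a quick generation to verify the weights before fine-tuning.
# Requires the `accelerate` package for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-Instruct-hf"  # or a local converted directory

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```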
-
I have created a custom model using `ollama create custom_model -f modelfile`. The custom model is based on codellama, and some examples and context are provided in the modelfile. In the CLI interface…
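Once the Modelfile-based model has been created, it can also be queried programmatically. A minimal sketch using the official ollama Python client is shown below; it assumes an Ollama server is running locally and reuses the `custom_model` name from the command above.

```python
# Minimal sketch: query the custom Modelfile-based model with the official
# ollama Python client. Assumes `ollama create custom_model -f modelfile`
# has already been run and the Ollama server is reachable locally.
import ollama

response = ollama.chat(
    model="custom_model",
    messages=[{"role": "user", "content": "Write a function that reverses a string."}],
)
# Depending on the client version the response is a dict or a typed object;
# in both cases the reply text is available under message.content.
print(response["message"]["content"])
```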
-
Summary: when using ellama-code-complete in an existing buffer, only the first code block is added to the current buffer. Any non-code text and the second code block are missing.
Ollama terminal output showing …
-
### What is the issue?
CLI:
```
$ ollama run codellama:34b
Error: llama runner process has terminated: signal: segmentation fault
```
Logs:
```
May 11 02:47:28 gpu ollama[27286]: time=…
```
-
I used AWQ to build the codellama-13b quantized npz model file into TensorRT format, but encountered this error. My command was as follows:
python build.py --model_dir /app/models/CodeLlama-13b-hf/ \…
-
How can I use codellama instead of the OpenAI API?
How can I make it use llama.cpp?
ouvaa updated 1 month ago
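One common approach: both llama.cpp's built-in server and Ollama expose OpenAI-compatible endpoints, so code written against the OpenAI API can usually be pointed at a local CodeLlama model just by changing the base URL. The sketch below assumes llama.cpp's `llama-server` running locally on port 8080 and a model exposed under the name `codellama`; the host, port, and model name are placeholders for a default local setup.

```python
# Minimal sketch: reuse OpenAI-style client code against a local CodeLlama
# server instead of the OpenAI API. The base_url assumes llama.cpp's server
# on port 8080 (Ollama's OpenAI-compatible endpoint would typically be
# http://localhost:11434/v1); local servers usually ignore the api_key.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="codellama",  # model name as exposed by the local server
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
)
print(completion.choices[0].message.content)
```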