-
I would like to fine-tune CodeLlama-13b in a memory-efficient way.
I was able to do this with CodeLlama-7b, but I am failing with 13b.
I can't load the model `unsloth/codellama-13b-bnb-4bit`:
```pyth…
-
The codellama model should have a more precise name. I just checked on my computer and here is visual proof that it was codellama:7b:
![image](https://github.com/haesleinhuepf/human-eval-bia/assets…
-
https://brandolosaria.medium.com/setting-up-metaais-code-llama-34b-instruct-model-fc009aa937f6
https://github.com/go-skynet/LocalAI
-
As an app developer who wants to add the AI Assist feature via VZCode, I want to use CodeLlama, so that I'm not locked into OpenAI.
See https://replicate.com/meta/codellama-34b/api?tab=node
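For reference, a sketch of what the request payload for that endpoint might look like from Python. The model slug `meta/codellama-34b` and the input fields (`prompt`, `max_tokens`, `temperature`) are taken as assumptions from the linked API page; the helper name is hypothetical.

```python
# Hypothetical sketch: assembling the input for a CodeLlama completion
# request on Replicate. Field names are assumptions from the linked page.

def build_codellama_input(prompt: str, max_tokens: int = 256,
                          temperature: float = 0.2) -> dict:
    """Assemble the input payload for a codellama-34b completion request."""
    return {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = build_codellama_input("Write a Python function that reverses a string.")

# With the `replicate` client installed and REPLICATE_API_TOKEN set,
# the actual call would look roughly like:
#   import replicate
#   output = replicate.run("meta/codellama-34b", input=payload)
#   print("".join(output))
```

Using a hosted endpoint like this keeps the editor integration provider-agnostic: only the payload builder and the model slug would change when swapping backends.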
```j…
-
Currently, we do not have a prompt style for the CodeLlama model(s).
Example:
* Quantized models on [TheBloke/CodeLlama-70B-Instruct-GGUF][0]
* Prompt template
```
Source: system
{system_message…
-
The suggestion is not displayed when using CodeLlama. This is not the case with Starcoder, which shows the suggestion on the line it was triggered from.
Here are the attempts:
#### Requesting fro…
-
I want to know how to do RAG with CodeLlama, e.g. **codellama/CodeLlama-7b-hf**.
What changes are needed for that? Is any change to the tokenizer required?
Please help.
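In general, RAG needs no tokenizer change: retrieved text is simply prepended to the prompt, so the only hard constraint is that the combined prompt fits the model's context window. A toy sketch of that flow, using naive keyword overlap as a stand-in for a real embedding-based retriever (the `docs` corpus and function names here are illustrative only):

```python
# Toy RAG sketch: retrieve relevant context, prepend it to the question,
# then feed the combined prompt to the model (e.g. CodeLlama-7b-hf).
# Keyword-overlap scoring stands in for a real embedding retriever.
import re

def _tokens(text: str) -> set:
    """Lowercased word tokens of length >= 2."""
    return set(re.findall(r"[a-z]{2,}", text.lower()))

def retrieve(query: str, docs: list, k: int = 1) -> list:
    """Rank documents by word overlap with the query; return the top k."""
    q = _tokens(query)
    scored = sorted(docs, key=lambda d: len(q & _tokens(d)), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list) -> str:
    """Prepend the retrieved context to the user question."""
    ctx = "\n".join(context)
    return f"Context:\n{ctx}\n\nQuestion: {query}\nAnswer:"

docs = [
    "def add(a, b): return a + b",
    "def reverse(s): return s[::-1]",
]
prompt = build_prompt("how do I reverse a string",
                      retrieve("how do I reverse a string", docs))
```

The resulting `prompt` string is then passed to the model's normal tokenizer and `generate` call unchanged.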
-
### Describe the bug
When I use xinference to run [codellama-70b-instruct](https://huggingface.co/codellama/CodeLlama-70b-hf/tree/main), it outputs unrelated text.
Just like below:
![im…
-
Here is a piece of code in the file mergekit/mergekit/moe/qwen.py:
```python
for model_ref in (
    [config.base_model]
    + [e.source_model for e in config.experts]
    + [e…
```
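The loop above iterates over one flat list: the base model reference followed by each expert's source model. A standalone illustration of that concatenation pattern, with hypothetical dataclasses standing in for mergekit's actual config types:

```python
# Standalone sketch of the quoted iteration pattern: concatenate the base
# model with each expert's source model into one flat list of references.
# These dataclasses are hypothetical stand-ins for mergekit's own types.
from dataclasses import dataclass

@dataclass
class Expert:
    source_model: str

@dataclass
class MoEConfig:
    base_model: str
    experts: list

config = MoEConfig(base_model="base", experts=[Expert("e1"), Expert("e2")])

refs = [config.base_model] + [e.source_model for e in config.experts]
for model_ref in refs:
    print(model_ref)  # prints base, e1, e2 in order
```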
-
When I try to run the Python script, I get this error:
```
TypeError Traceback (most recent call last)
Cell In[6], line 10
6 max_batch_size = 4
7 max_gen_l…