-
See https://ai.meta.com/blog/code-llama-large-language-model-coding/
From https://huggingface.co/TheBloke/CodeLlama-13B-Python-GGML/tree/main
```
llm llama-cpp download-model \
https://hugging…
-
https://github.com/charmbracelet/glow
I can use this as a markdown formatter for separate questions like
`ollama run phind-codellama 'show me basic python example' | glow`
`ollama run phind-codell…
-
As title says
If I've already pulled the new (as of 2024-01-30) codellama-70b from meta (or python variant)
Will Llama Coder use this?
Or does it download the 34b and run that?
Does it just run …
-
CodeBLEU shows that fine-tuned CodeLlama is behind GPT-4.
However, CodeLlama outperforms GPT-4 on the compilation and execution test metrics.
These results do not look consistent.
My own tests
…
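The gap between the two metrics is plausible in principle: CodeBLEU rewards surface similarity to a reference solution, while compilation/execution metrics only check behavior, so a model can score worse on one and better on the other. A minimal sketch of an execution-based pass rate (the candidate snippets below are hypothetical model outputs, not actual CodeLlama or GPT-4 completions):

```python
# Minimal sketch of an execution-based metric: a candidate solution
# "passes" only if it defines the function and survives the test cases.
# The candidates are hypothetical model outputs for "write add(a, b)".
candidates = [
    "def add(a, b):\n    return a + b",   # behaviorally correct
    "def add(a, b):\n    return a - b",   # compiles, wrong behavior
    "def add(a, b) return a + b",         # syntax error, fails to compile
]

TESTS = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"

def passes(snippet: str, tests: str) -> bool:
    env = {}
    try:
        exec(snippet, env)  # "compilation" step: define the function
        exec(tests, env)    # execution step: run the unit tests
        return True
    except Exception:
        return False

results = [passes(c, TESTS) for c in candidates]
pass_rate = sum(results) / len(results)
print(f"pass rate: {pass_rate:.2f}")  # only the first candidate passes
```

Note that the second candidate would score high on a similarity metric like CodeBLEU (it is one token away from the reference) yet fails execution, which is exactly the kind of divergence described above.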
-
As the title says, a chat feature is really missing; e.g. if I want the assistant to explain some code, I currently can't do that with Llama Coder.
-
After loading the local model codellama-7b.Q4_K_M.gguf, an error is reported during Q&A interaction.
-
LlamaSharp config from appsettings.json
```
"LlamaSharp": {
  "Interactive": true,
  "ModelDir": "C:\\Models\\TheBloke\\CodeLlama-7B-GGUF",
  "DefaultModel": "codellama-7b.Q5_K_M.gguf",
…
-
- Already posted on https://github.com/vllm-project/vllm/issues/1479
- My GPU is RTX 3060 with 12GB VRAM
- My target model is [CodeLlama-7B-AWQ](https://huggingface.co/TheBloke/CodeLlama-7B-AWQ), whi…
-
Model: codellama/CodeLlama-7b-Python-hf
Build command:
`python build.py --model_dir /docker_storage/CodeLlama-7b-Python-hf/ --dtype float16 \
--remove_input_padding --use_gpt_attention_plugin float…
-
Hello,
the fine-tuning process completed successfully; however, an error occurs when I try to run the inference separately by loading the model with this code:
```
import torch
from transformers import AutoModelForCausalLM, Bits…