-
I launched tritonserver following the README with codellama-7b-hf, and sent requests over HTTP.
```
curl -X POST localhost:8000/v2/models/ensemble/generate -d '{"text_input": "write a quick sort", "max_tok…
-
Could you offer the training code for codellama? Was it trained on llamaX?
-
I have nvim/llm working with ollama, which uses llm-ls-x86_64-unknown-linux-gnu-0.5.3.
I tried to switch the config to use the OpenAI API to connect to a llamacpp server,
because it does support my AMD …
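Since llama.cpp's server exposes an OpenAI-compatible endpoint, the client side only needs to point at it. Below is a hedged sketch of what such a request could look like; the host, port (localhost:8080), and model name are assumptions, not taken from the poster's config:

```python
import json
import urllib.request

# Assumed endpoint for a locally running llama.cpp server with its
# OpenAI-compatible API enabled; adjust host/port to your setup.
LLAMACPP_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "codellama") -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        LLAMACPP_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it is one call: urllib.request.urlopen(req)
req = build_chat_request("Explain this function")
```

The same request body works against any OpenAI-compatible backend, which is what makes switching between ollama, llama.cpp, and hosted APIs a configuration change rather than a code change.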
-
### Describe the bug
I am not able to get any reasonable output when using codellama hosted on togetherAI.
### Reproduce
```python
import os
os.environ["TOGETHERAI_API_KEY"] = "..."
from interpre…
-
I understand this might be a huggingface-related problem, but I cannot find the answer anywhere, so I have come to ask for help.
On huggingface there is an example code snippet for the codellama model:
>>>from tran…
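The quoted example is cut off, but for reference, the CodeLlama model card describes a fill-in-the-middle prompt format along these lines. This is a hedged sketch only; the exact special-token spellings should be verified against your checkpoint's tokenizer:

```python
# Hedged sketch of CodeLlama's fill-in-the-middle (infilling) prompt
# format as described in its model card. The <PRE>/<SUF>/<MID> token
# spellings are assumptions to verify against the actual tokenizer.
def build_infill_prompt(prefix: str, suffix: str) -> str:
    """Assemble an infilling prompt: the model generates the gap
    between prefix and suffix, stopping at its end-of-infill token."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

prompt = build_infill_prompt(
    'def remove_non_ascii(s: str) -> str:\n    """ ',
    '\n    return result',
)
```

With the real tokenizer, these markers are single special tokens, so the string above is only an approximation of what the tokenizer assembles internally.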
-
Hey Phil, thanks for putting together these tutorials.
I am trying to fine-tune CodeLlama using HF SageMaker, but I am facing errors with the tokenizer; I think the provided transformers image…
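If the root cause is the stock SageMaker image shipping a transformers version that predates CodeLlama's tokenizer support (added around transformers 4.33), one common workaround is pinning newer versions via a `requirements.txt` in the training job's `source_dir`, which the SageMaker Python SDK installs at container startup. The exact pins below are assumptions to adapt:

```
# requirements.txt placed in the estimator's source_dir
# (hypothetical pins; verify against your image's compatibility)
transformers>=4.33.0
tokenizers>=0.13.3
```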
-
**What problem or use case are you trying to solve?**
Currently OpenDevin somewhat works with the strongest closed LLMs such as GPT-4 or Claude Opus, but we have not confirmed good results with ope…
-
As the title says, the chat feature is really missing: e.g., if I want the assistant to explain some code, I currently can't do it with llama coder.
-
Let's try to rethink our analysis methods using Huggingface transformers
-
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-compass/opencompass/issues/) and [Discussions](https://github.com/open-compass/opencompass/discussions) but cannot get the ex…