-
- [ ] [LLaVA/README.md at main · haotian-liu/LLaVA](https://github.com/haotian-liu/LLaVA/blob/main/README.md?plain=1)
## 🌋 LLaVA: Large Language and Vi…
-
**Describe the bug**
I assume there's a way to configure Continue so that no calls are made to the OpenAI servers (which is what causes this error), but I can't find it.
**Relevant Continue Config**
```python
config = Conti…
-
I assume the project will want to support Fill-in-the-Middle (FIM) tokenization to work with the CodeLlama models. How will this be accomplished?
Reading the codellama paper (https://arxiv.org/abs…
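For context, the Code Llama paper describes infilling with sentinel tokens in prefix-suffix-middle (PSM) order. The sketch below composes such a prompt as a plain string; in a real tokenizer the `<PRE>`/`<SUF>`/`<MID>` sentinels map to dedicated special-token ids rather than literal text, and the exact spacing is model-specific, so treat this as an illustration of the format only.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Compose a fill-in-the-middle prompt in the PSM (prefix-suffix-middle)
    order used by Code Llama's infilling mode. The sentinel strings here are
    the paper's token names, written out as plain text for illustration."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# The model would be asked to generate the "middle" between these two spans.
prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
```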
-
Hello,
I tried running CodeLlama 70B using the docker.io/vllm/vllm-openai:v0.2.7 Docker image.
```
INFO 01-31 12:11:56 api_server.py:727] args: Namespace(host='0.0.0.0', port=8000, allow_credentials=…
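For reference, the vllm-openai image serves an OpenAI-compatible API on port 8000 (matching the `port=8000` in the log above). A minimal sketch of the request body for its `/v1/completions` route follows; the model name is an example I'm assuming here, not necessarily the checkpoint the reporter deployed.

```python
import json

def completion_payload(model: str, prompt: str, max_tokens: int = 64) -> str:
    # JSON body for the OpenAI-compatible /v1/completions endpoint that the
    # vllm-openai image exposes on port 8000. The model string must match
    # whatever was passed to the server at startup.
    return json.dumps({"model": model, "prompt": prompt, "max_tokens": max_tokens})

# Example body; POSTing it to http://<host>:8000/v1/completions is left out here.
body = completion_payload("codellama/CodeLlama-70b-hf", "def fib(n):")
```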
-
## Environment info
- `adapters` version: 0.1.2
- `transformers` version: 4.36.2
- Platform: Linux-5.4.0-173-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0…
-
### Before submitting your bug report
- [x] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [x] I'm not able to find an [open issue]…
-
Our `litgpt chat` command is doing something odd: it exits the shell and executes the code in the terminal.
To reproduce:
```
litgpt chat --checkpoint_dir out/custom-phi-2/final
```
…
rasbt updated 5 months ago
-
### Describe the issue
Simply running a sample AutoGen project on the new CodeLlama 70B, the model refuses to execute any code. Any tips on prompting to make the model aware that this is completely fine?
#…
-
Can the `seq_length` in config.json be taken to fully represent the window length used during pretraining? Was the 72B model trained from scratch with a 32k window?
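As a sketch of where that number lives: Qwen-style configs carry a `seq_length` field (other architectures use `max_position_embeddings`). The excerpt below is a hypothetical in-memory config, not the actual 72B file; whether the configured value equals the pretraining window or an extended inference window is exactly the question above and depends on the model card.

```python
import json

# Hypothetical excerpt of a config.json (not the real 72B file). Whether
# seq_length reflects the pretraining window or a later context extension
# must be confirmed against the model card.
config_text = '{"model_type": "qwen", "seq_length": 32768}'
config = json.loads(config_text)

# Fall back to max_position_embeddings for architectures that use that key.
context_len = config.get("seq_length") or config.get("max_position_embeddings")
```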
-
First of all, I must say: what a great piece of software Ollama is! THANK YOU for all your work, everyone!
I am trying to set up MemGPT to use CodeLlama via `ollama serve`.
I've made sure that I've p…
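For context, `ollama serve` exposes a REST API on port 11434, and MemGPT talks to it over that interface. A minimal sketch of the `/api/generate` request body follows; the `"codellama"` tag is an assumption and must match a model actually pulled locally.

```python
import json

def ollama_generate_request(model: str, prompt: str) -> str:
    # Body for POST http://localhost:11434/api/generate, the endpoint that
    # `ollama serve` exposes. stream=False requests a single JSON response
    # instead of a stream of chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

# Example body; the "codellama" tag must correspond to `ollama pull codellama`.
req = ollama_generate_request("codellama", "Write a haiku about llamas.")
```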