-
I don't understand how to set chat_llm to Ollama when there is no provision for pointing utility_llm and/or embedding_llm to local (Ollama) counterparts. Yes, I assume that prompting will be a challenge…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
I'm executing the following line of code:
```
new_index.storage_context.persist(pers…
```
-
The package complains about "torch" not being installed when it is most definitely installed.
```
(.env) chris@localhost:~$ pip install flash_attn
Collecting flash_attn
  Using cached fla…
```
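A likely cause (assuming a recent pip): `flash_attn`'s build script imports `torch` at build time, but pip compiles wheels in an isolated build environment that cannot see the `torch` already installed in `.env`. The workaround documented in the flash-attn README is to disable build isolation, for example:

```shell
# Build-time dependencies that flash-attn expects to be present
pip install ninja packaging

# Disable pip's isolated build environment so setup.py can see the
# torch that is already installed in the active virtualenv
pip install flash-attn --no-build-isolation
```

If the error persists, confirm that `python -c "import torch"` succeeds inside the same virtualenv the install runs in.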
-
Hey,
thanks for providing the torchtune framework.
I have an issue with a timeout when saving a checkpoint for a Llama 3.1 70B LoRA run on multiple GPUs.
I am tuning on an AWS EC2 instance with 8x V100 GPUs…
-
Requires extensive automated and manual testing, as well as code changes (imports), which are part of https://github.com/Chainlit/chainlit/pull/1267.
-
(Unfortunately, you will need a physical Pixel 8 or above to implement this.)
Many Commons contributors contribute in various languages, for instance in Urdu when posting a picture of a local dish th…
-
### Validations
- [ ] I believe this is a way to improve. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [ ] I'm not able to find an [open issue](https://githu…
-
## ❓ InternalError when running llava model
I'm new to mlc-llm and I'm not sure if this is a bug or me doing something incorrectly. I have so far not managed to run any model successfully. I have tr…
plufz updated 1 month ago
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
Command:
```
python -m main interactive /mistral-7B-v0.1/
```
Error:
```
Prompt: Hello
Traceback (most recent call last):
  File "/usr/local/anaconda3/envs/mistral/lib/python3.10/runpy.py", line 196, in _ru…
```