-
Are there any plans to port the library to torch 2? Since `parallelize()` is deprecated in torch 2, it becomes impossible to train larger models like Llama 7B and Mistral 7B even with A100 8…
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
**Is your feature request related to a problem? Please describe.**
Mistral is a large language model developed by Mistral AI, designed to understand and generate human-like…
-
Creating this issue to keep track of Models that will be nice to have ported:
--> Fuyu : https://huggingface.co/adept/fuyu-8b
--> OpenELM
--> CLIP
--> Whisper
--> SeamlessMT: facebook/seamless-…
-
### Your current environment
```text
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: RED OS release MUROM (7.3.4) Stan…
-
Hello! I had a thought. To minimize constant load for tasks that occur infrequently, is there a way to keep the Docker container running with the HTTP server, but only load the model when a query is m…
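The pattern being asked about (server stays up, model loads on first query) can be sketched in a few lines. This is a minimal illustration, not this project's API: `load_model` is a hypothetical stand-in for the expensive weight-loading step, and the lock just ensures concurrent first requests trigger only one load.

```python
import threading

class LazyModel:
    """Defer an expensive load until the first request needs it."""

    def __init__(self, loader):
        self._loader = loader          # hypothetical expensive load_model()
        self._model = None
        self._lock = threading.Lock()

    def get(self):
        # Fast path: already loaded, no locking needed.
        if self._model is None:
            with self._lock:
                # Double-check under the lock so concurrent first
                # requests run the loader only once.
                if self._model is None:
                    self._model = self._loader()
        return self._model

load_calls = []
def load_model():                      # stand-in for loading real weights
    load_calls.append(1)
    return "model-weights"

model = LazyModel(load_model)
# The HTTP server can start immediately; nothing is loaded yet.
```

The container and HTTP server stay running the whole time; the first query pays the load cost and every later query reuses the cached model.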
-
**Is your feature request related to a problem? Please describe.**
I am not able to generate more than one token when following a context-free grammar.
**Describe the solution you'd like**
I would like t…
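For context, multi-token grammar-constrained decoding generally works by masking the sampler to the tokens the grammar allows at each step, then advancing the grammar state. The toy sketch below illustrates that loop; it is not any library's real API, and `pick` is a hypothetical stand-in for the model's (masked) sampling step. The grammar here accepts expressions like `3+7+1`.

```python
import random
import re

def allowed_tokens(state):
    """Map each allowed next token to the grammar state it leads to."""
    if state == "digit":                        # a digit must come next
        return {d: "op" for d in "0123456789"}
    if state == "op":                           # then '+' or end-of-sequence
        return {"+": "digit", "<eos>": "end"}
    return {}

def constrained_generate(pick, max_tokens=16):
    """Generate several tokens, restricting every step to the grammar.

    `pick` stands in for the model: given the sorted list of allowed
    tokens it returns one of them (a real decoder would mask the
    logits over the full vocabulary and renormalize).
    """
    state, out = "digit", []
    for _ in range(max_tokens):
        options = allowed_tokens(state)
        token = pick(sorted(options))
        if token == "<eos>":
            break
        out.append(token)
        state = options[token]
    return "".join(out)

rng = random.Random(0)
text = constrained_generate(rng.choice)
```

Because the mask is re-derived from the grammar state after every emitted token, generation keeps going past the first token until the grammar (or the token budget) ends it.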
-
Update VisualQnA example that uses Falcon VLM.
This would require including Falcon as part of the validation at https://github.com/opea-project/GenAIComps/tree/main/comps/llms. And then create an …
-
# Feature Description
As Phi is already supported, it would be great to have this Mistral-level 2B model available in GGUF format.
# Motivation
A SOTA 2B model and a piece of art; read how they made it:
https://she…
-
### Issue
Mistral Large 2 is messing up the filenames.
When using the `whole` edit format, instead of writing only the filename on the line before the code, it often writes stuff like:
```
Updat…