-
Open Source:
- Llama 3
- Mistral
- Cohere Command+

Closed Source:
- GPT
- Sonnet 3.5
-
**Describe the bug**
After installing ragas, I tried to import it and got an error on the import of the Pydantic output parser from LangChain.
Ragas version: 0.1.6
Python version: 3.10
LangChain vers…
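Import errors like this are often caused by a version mismatch between ragas and LangChain. One way to collect the exact installed versions for the report is a small stdlib-only sketch (the package names checked here are taken from this report and may not match the actual distribution names):

```python
import importlib.metadata
import importlib.util

def installed_version(pkg: str) -> str:
    """Report whether a module is importable and which version is installed."""
    if importlib.util.find_spec(pkg) is None:
        return f"{pkg}: not installed"
    try:
        return f"{pkg}: {importlib.metadata.version(pkg)}"
    except importlib.metadata.PackageNotFoundError:
        # Importable but carries no distribution metadata (e.g. stdlib modules).
        return f"{pkg}: version unknown"

for pkg in ("ragas", "langchain", "pydantic"):
    print(installed_version(pkg))
```

Pinning the LangChain version that the installed ragas release was built against is the usual fix once the mismatch is identified.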
-
```
model, err := openai.New(openai.WithToken(openapikey.OpenApikey), openai.WithModel("gpt-3.5-turbo-instruct"))
if err != nil {
	log.Fatal(err)
}
completion, err := llms.Generat…
-
### Your current environment
```text
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC ve…
-
Add filtering of the LLMs, for installations with more than a dozen LLMs.
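A case-insensitive substring filter over the registered model names would be one simple form of this. A minimal sketch (the `filter_llms` helper and the model names are hypothetical, not part of any existing codebase):

```python
def filter_llms(models: list[str], query: str) -> list[str]:
    """Case-insensitive substring filter over registered model names."""
    q = query.lower()
    return [m for m in models if q in m.lower()]

# Hypothetical model list for illustration
models = ["llama3:8b", "llama3:70b", "mistral:7b", "command-r-plus", "gpt-4o"]
print(filter_llms(models, "llama"))  # ['llama3:8b', 'llama3:70b']
```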
-
Nice work! I have a question: why should we concatenate these two features (LLM and CLIP) instead of just using the LLM features, as some other works have done: https://github.com/Kwai-Kolors/Kolors
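For concreteness, concatenation just widens the conditioning vector along the channel axis so downstream layers see both embedding spaces at once. A NumPy sketch of the two options (the shapes are purely illustrative, not the actual LLM/CLIP feature sizes):

```python
import numpy as np

llm_feat = np.random.rand(77, 4096)   # per-token LLM text features (illustrative shape)
clip_feat = np.random.rand(77, 768)   # per-token CLIP text features (illustrative shape)

# Option A: LLM features only, as some other works do
cond_a = llm_feat

# Option B: concatenate along the channel dimension, so downstream
# attention layers can attend over both embedding spaces
cond_b = np.concatenate([llm_feat, clip_feat], axis=-1)

print(cond_a.shape, cond_b.shape)  # (77, 4096) (77, 4864)
```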
-
### Describe your use-case.
There are multiple simple models used in this repository: BLIP, CLIP, and WD taggers. However, when it comes to detailed description, they are all dwarfed by modern multi…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the [LangGraph](https://langchain-ai.github.io/langgraph/)/LangChain documentation with the integrat…
-
To aid in the design for both of these:
- #331
- #556
I'm going to gather a bunch of examples of how different LLMs accept multi-modal inputs. I'm particularly interested in the following:
- …
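As one data point for that survey: OpenAI-style chat APIs accept images by making the user message `content` a list of typed parts. A sketch of that payload shape (the URL is a placeholder, and other providers use different schemas):

```python
# Hedged sketch of an OpenAI-style multimodal chat message; the field names
# follow the Chat Completions image_url content-part format. Other providers
# (e.g. Anthropic, Google) use different message schemas.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this image?"},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
    ],
}
print([part["type"] for part in message["content"]])  # ['text', 'image_url']
```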
-
Request to Add Multimodal LLMs in unsloth
Revising my previous issue: https://github.com/unslothai/unsloth/issues/376