-
**Describe the solution you'd like**
The list of available LLMs should also include models from NVIDIA.
**Additional context**
This will also require a new place to insert the NVIDIA API key.
**Requires**
- [ ]…
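One minimal way to handle the new key, sketched under the assumption that the NVIDIA endpoint is OpenAI-compatible and that the key lives in an environment variable. The variable name `NVIDIA_API_KEY`, the helper name, and the base URL are all placeholders, not part of any existing codebase:

```python
import os

# Assumed OpenAI-compatible NVIDIA endpoint (illustrative, verify before use).
NVIDIA_BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_nvidia_headers(env=os.environ):
    """Read the NVIDIA API key from the environment and build request headers.

    Raises RuntimeError when the key is missing, so misconfiguration fails fast.
    """
    key = env.get("NVIDIA_API_KEY")
    if not key:
        raise RuntimeError("NVIDIA_API_KEY is not set")
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
```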
-
Has anyone tried running it locally?
I adapted it for use with LM Studio by changing the tokenizer, LLM calls, and configurations. The connection to the API endpoint works, and persona creation is su…
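The LM Studio adaptation above boils down to pointing OpenAI-style requests at the local server. A minimal sketch, assuming LM Studio's default local endpoint (`http://localhost:1234/v1`) and using a placeholder model name for whatever model is loaded locally:

```python
import json

# LM Studio serves an OpenAI-compatible API locally; this is its default port.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def lm_studio_request(messages, model="local-model", temperature=0.7):
    """Build the JSON body for an OpenAI-compatible chat completion call.

    `model` is a placeholder; LM Studio routes to whichever model is loaded.
    """
    return json.dumps({
        "model": model,
        "messages": messages,
        "temperature": temperature,
    })
```

The same body can then be POSTed to `LM_STUDIO_URL` with any HTTP client.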
-
- https://arxiv.org/pdf/2405.00492
- https://arxiv.org/pdf/2406.10279
## The relationship between Temperature and Hallucination
![Screenshot 2024-11-16 at 9 21 08 PM](https://github.com/user-atta…
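The mechanism behind the relationship: temperature divides the logits before the softmax, so low temperature sharpens the output distribution toward the top token while high temperature flattens it toward uniform, making low-probability (often hallucinated) tokens more likely to be sampled. A minimal sketch:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply a numerically stable softmax.

    As temperature -> 0 the distribution approaches argmax;
    as temperature grows it approaches uniform.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```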
-
Can I use other LLMs? I connect to other remote models via their APIs, then run a local web server that bridges any OpenAI-compatible HTTP request to the respective model.
I can see Lumos…
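The core of such a bridge is a routing table from the requested model name to the backend that serves it. A minimal sketch; the backend URLs and model-name prefixes below are illustrative, not from any particular setup:

```python
# Hypothetical routing table for a local bridge: an incoming OpenAI-compatible
# request names a model, and the bridge forwards it to the matching backend.
BACKENDS = {
    "claude": "https://api.anthropic.com",
    "gemini": "https://generativelanguage.googleapis.com",
    "gpt": "https://api.openai.com",
}

def resolve_backend(model_name, backends=BACKENDS):
    """Pick a backend URL from the model name's prefix; None if unknown."""
    for prefix, url in backends.items():
        if model_name.startswith(prefix):
            return url
    return None
```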
-
For functional testing of features that rely on LLM calls or LLM tasks, the main challenge is testing against a stubbed LLM environment, so that the output of those calls or tasks can be controlled.
…
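A stub along those lines can be very small: canned responses keyed on a prompt substring, plus a call log for assertions. The single `.complete` method below is an assumed interface for illustration, not any specific framework's API:

```python
class StubLLM:
    """Test double for an LLM client: deterministic, controllable output."""

    def __init__(self, canned):
        self.canned = canned   # {prompt substring: canned response}
        self.calls = []        # records every prompt, for test assertions

    def complete(self, prompt):
        self.calls.append(prompt)
        for key, response in self.canned.items():
            if key in prompt:
                return response
        return "DEFAULT_STUB_RESPONSE"
```

Tests can then inject `StubLLM` wherever the real client is used and assert on both the responses the feature saw and the prompts it sent.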
-
- [x] Together AI: Assigned to @madhavi-peddireddy
- [x] Perplexity: Assigned to @adityasingh-0803
- [x] Cohere: Assigned to @madhavi-peddireddy
- [x] AWS Bedrock: Assigned to @Ajaykumarkv17
- …
-
### Self Checks
- [X] I have [searched for existing issues](https://github.com/langgenius/dify/issues), including closed ones.
- [X] I confirm that I am using English to su…
-
### Is your feature request related to a problem or challenge?
LLMs provide a fantastic way to learn and use a new codebase. By providing the documentation, they can create a custom guide for new use…
-
**What problem or use case are you trying to solve?**
Not Diamond intelligently identifies which LLM is best-suited to respond to any given query. We want to implement a mechanism in OpenHands to s…
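In spirit, such a mechanism scores each candidate model against the query and picks the best match. The keyword heuristic below merely stands in for a learned router like Not Diamond's; the model names and keywords are placeholders:

```python
# Hypothetical routing table: each model is paired with query keywords it
# handles well. A real router would use a trained scoring model instead.
ROUTES = {
    "code-model": {"code", "bug", "function", "traceback"},
    "reasoning-model": {"prove", "why", "explain"},
    "general-model": set(),
}

def route_query(query, routes=ROUTES, default="general-model"):
    """Return the model whose keywords best overlap the query's words."""
    words = set(query.lower().split())
    best, best_score = default, 0
    for model, keywords in routes.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = model, score
    return best
```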
-
Given that an LLM is an evolving system, reporting the prompts might not be enough: different versions are likely to return different answers. For complete transparency, a researcher, when possible, sh…
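One lightweight way to act on this is to log a run record alongside each prompt, capturing the exact model version and sampling parameters so results stay interpretable across versions. The field names below are illustrative, not a standard schema:

```python
import datetime

def make_run_record(model, model_version, prompt, temperature, seed=None):
    """Bundle the metadata needed to reproduce (or at least interpret) a call.

    `model_version` should be the exact version/snapshot identifier the
    provider exposes, since the model family name alone is not enough.
    """
    return {
        "model": model,
        "model_version": model_version,
        "prompt": prompt,
        "temperature": temperature,
        "seed": seed,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
```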