-
When I asked Claude to put today’s timestamp in a project summary file, it used:
`BB rewrote file project_summary_2024_03_19_0001.txt with new contents.`
I'm guessing this is a standard LLM iss…
-
/kind feature
**Describe the solution you'd like**
This proposal outlines the need for an API to standardize the discovery, installation, and aggregation of LLM functions, agents, or tools in Kub…
-
[x] I have checked the [documentation](https://docs.ragas.io/) and related resources and couldn't resolve my bug.
**Describe the bug**
I have followed the documentation for prompt adaptation to P…
-
If the LLM returns something that is close to an instance of the output schema, and BAML users are willing to accept the latency and dollar cost of an LLM-based retry, then we should give them that op…
-
### What you would like to be added?
Inspired by this research paper [Vidur: A Large-Scale Simulation Framework For LLM Inference](https://proceedings.mlsys.org/paper_files/paper/2024/file/b74a8de47d…
-
I am writing to propose the integration of Ollama, a local large language model (LLM) solution, into the Windows Agent Arena GitHub repository. As the landscape of AI continues to evolve, there is a s…
-
@mroch @li-boxuan @jeremi @penberg @JensRoland
Integrate a feature that allows users to use multiple LLM models in the project, each with its own area of expertise.
For example:
when user add 3…
-
**Is the feature request related to a problem?**
Yes. Currently, Gorilla has a limited set of APIs that it can access. This limits the functionality and potential use cases for LLMs using Gorilla. Ad…
-
Testing the call:
```
litellm_completion(
model,
messages,
stream=stream,
custom_llm_provider=self._config.nearai_hub.custom_llm_provider,
…
-
### 🚀 The feature
**How It's Working Currently** Right now, if I add the same document twice, it first searches memories and then updates, adds, or deletes memory.
**How It Should be** There s…
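One way to short-circuit the pipeline for an identical document is a content-hash check before any memory operation runs. This is a sketch under that assumption; `MemoryStore` and its fields are hypothetical, not the library's actual classes:

```python
import hashlib

class MemoryStore:
    """Sketch: skip the search/update/delete pipeline when the exact
    same document was already ingested (identified by content hash)."""

    def __init__(self):
        self._seen: set = set()
        self.pipeline_runs = 0  # counts how often the expensive path ran

    def add(self, document: str) -> bool:
        digest = hashlib.sha256(document.encode("utf-8")).hexdigest()
        if digest in self._seen:
            # Identical document: return early, no memory operations.
            return False
        self._seen.add(digest)
        self.pipeline_runs += 1  # stand-in for the search/update work
        return True
```

An exact-hash check only catches byte-identical duplicates; near-duplicates would still need the existing similarity search.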