-
Hi, I appreciate your wonderful work!
May I know the running time of your method, especially the run on ProntoQA? Currently I'm running Llama-2-70b-chat-hf via vLLM (I deploy the model on …
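A minimal timing sketch with vLLM's offline API, in case it helps compare wall-clock numbers; the model name is taken from the question above, while `tensor_parallel_size`, the prompt, and the sampling settings are assumptions:
```python
import time

from vllm import LLM, SamplingParams

# Hypothetical timing harness; tensor_parallel_size, the prompt,
# and max_tokens are placeholders, not values from the original question.
llm = LLM(model="meta-llama/Llama-2-70b-chat-hf", tensor_parallel_size=4)
params = SamplingParams(temperature=0.0, max_tokens=256)

prompts = ["Jompuses are not shy. Max is a jompus. Is Max shy? Answer:"]  # placeholder ProntoQA-style prompt
start = time.perf_counter()
outputs = llm.generate(prompts, params)
print(f"{time.perf_counter() - start:.1f}s for {len(prompts)} prompt(s)")
```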
-
Hi,
Thank you so much for this wonderful framework. However, I get this error message. I'm using several GGML models with pyllamacpp; I have an AMD Ryzen 7 3700X 8-core and 32 GB of RAM.
Text generation r…
-
### What happened?
With `litellm==1.48.2`, a LiteLLM error randomly shows up in the logs:
```none
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you …
-
We have a framework that generates prompts on the fly (https://github.com/microsoft/genaiscript), which means that the prompt files are built on the fly and sent to the LLM (This works great with the custo…
-
Since a rating is linked to a category of products, and the selection of which category a product lives in is mostly subjective (is AirTable a Database or a Spreadsheet? Is Grammarly an AI Copilot or …
-
I ran into a question when using multi-adapter. Loading different PEFT adapters and calling them by adapter_name/adapter_id works. However, can I call the vanilla LLM? For example, I deploy Llama2 w…
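One way to reach the vanilla base model while adapters are loaded is PEFT's `disable_adapter()` context manager; a minimal sketch, assuming the adapters sit on a transformers model, with placeholder paths and adapter names:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholders: the base checkpoint and adapter paths are assumptions.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

model = PeftModel.from_pretrained(base, "adapters/task_a", adapter_name="task_a")
model.load_adapter("adapters/task_b", adapter_name="task_b")

inputs = tok("Hello, world", return_tensors="pt")

model.set_adapter("task_b")                 # route generation through a named adapter
with model.disable_adapter():               # temporarily bypass every adapter
    vanilla_out = model.generate(**inputs)  # this call uses the base (vanilla) weights
```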
-
I am facing the issue below when trying to use the Azure OpenAI service.
When using the following code, I get this error:
`import guidance
llm_azure = guidance.llms.OpenAI(
"gpt-3.…
-
### Describe the issue
Hi,
As part of AutoGen Studio, is there support for streaming the response to the web client, rather than flushing the response towards the end? Any suggestions or guidelines on ho…
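Setting AutoGen Studio's internals aside, a generic sketch of streaming chunks to a web client with FastAPI's `StreamingResponse`; the endpoint and the token source are invented for illustration:
```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

async def token_stream(prompt: str):
    # Placeholder generator: in a real setup, yield chunks as the agent emits them.
    for chunk in ("Working on: ", prompt, " ..."):
        yield chunk

@app.get("/chat")
async def chat(prompt: str):
    # Chunks are flushed to the client as they are yielded, not only at the end.
    return StreamingResponse(token_stream(prompt), media_type="text/plain")
```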
-
### Issue
As the session grows longer, the history that is actually sent to the model with each inference becomes more and more compressed. Often I have situations where I go in circles for a whi…
-
- [ ] [RichardAragon/MultiAgentLLM](https://github.com/richardaragon/multiagentllm)
# RichardAragon/MultiAgentLLM
**DESCRIPTION:** "Multi Agent Language Learning Machine (Multi Agent LLM)
(Update)…