-
Hello
I am a beginner-level user of PrivateGPT and have set it up in 'local' mode with mistral-7b-instruct-v0.2.Q4_K_M.gguf as the LLM.
Please advise me on how to add Groq (an OpenAI-compatible LLM service - https://…
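Since Groq exposes an OpenAI-compatible API, one starting point is to build a plain chat-completion request against its endpoint. This is a minimal standard-library sketch, not PrivateGPT's actual integration path; the base URL is taken from Groq's public quickstart and the model name (`llama3-8b-8192`) is only an example, so verify both against the current docs:

```python
import json
import os
import urllib.request

# Groq's OpenAI-compatible base URL (from Groq's quickstart docs;
# treat the exact path as an assumption and verify it).
GROQ_BASE = "https://api.groq.com/openai/v1"

def build_chat_request(prompt, model="llama3-8b-8192", api_key=None):
    """Build an OpenAI-style chat-completion request aimed at Groq."""
    api_key = api_key or os.environ.get("GROQ_API_KEY", "")
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{GROQ_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (requires a valid GROQ_API_KEY):
#   with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

Any client that lets you override the OpenAI base URL and API key can be pointed at the same endpoint.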
-
### Brief Description
Add support for a Groq chat agent.
### Rationale
1. Faster streaming responses
### Suggested Implementation
**vocode/streaming/agent/groq_agent.py**
```
import logg…
```
-
### Expected feature
Hi.
As I said, I tested 2.0.6 and 2.1.0 this weekend, and both work great.
No problem!
Now about the LLM.
Ollama is very slow on a VPS with no GPU.
Can you add the Groq API?
http…
-
Any plans on integrating Groq? It is very fast and can save a lot of time compared with other options.
Groq link: https://console.groq.com/docs/quickstart
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a…
-
This is a game changer for speed with Llama 3.
-
If a function decorated with weave is called from a Flask endpoint, weave won't work.
CODE:
```
from flask import Flask
import os
from groq import Groq
import weave

app = Flask(__name__)

@we…
```
-
The output from https://console.groq.com gets cut off due to the speed of inference.
![image](https://github.com/jackMort/ChatGPT.nvim/assets/39594914/9ab76b6b-e063-4cd3-add8-ecf713ff2583)
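When inference is this fast, a UI can drop or truncate output if it renders each streamed chunk as it arrives. Groq streams tokens as OpenAI-style server-sent events, so one mitigation is to parse and accumulate the deltas before rendering. A sketch of that parsing, assuming the OpenAI-style `choices[0].delta` chunk format:

```python
import json

def parse_sse_chunk(line):
    """Parse one server-sent-events line from an OpenAI-style stream.

    Returns the text delta, or None for keep-alives and the [DONE] marker.
    """
    line = line.strip()
    if not line.startswith("data:"):
        return None
    data = line[len("data:"):].strip()
    if data == "[DONE]":
        return None
    chunk = json.loads(data)
    return chunk["choices"][0]["delta"].get("content")

def accumulate(lines):
    """Join all text deltas so a slow terminal/UI cannot drop fast tokens."""
    return "".join(t for t in map(parse_sse_chunk, lines) if t)
```

Accumulating (or rendering on a timer) decouples display speed from inference speed, which is the usual fix when output "gets cut off".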
-
### Description
I am trying to interact with the LLaMA model `llama3-8b-8192` using Groq. However, I'm encountering issues with the integration, and it seems like the library might be prompting for a…
-
Hello,
Does anyone know a simple way to connect Groq to guidance?
https://console.groq.com/docs/quickstart
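One generic route, since Groq speaks the OpenAI protocol: the official `openai` v1 Python client reads `OPENAI_BASE_URL` and `OPENAI_API_KEY` from the environment at construction time, so libraries built on its defaults can be redirected to Groq without code changes. Whether guidance's OpenAI backend picks these up is an assumption to verify against its docs:

```python
import os

# openai-python v1 reads these when a client is constructed with defaults;
# libraries built on that client may inherit the setting (an assumption
# for guidance specifically -- check its documentation).
os.environ["OPENAI_BASE_URL"] = "https://api.groq.com/openai/v1"
os.environ["OPENAI_API_KEY"] = os.environ.get("GROQ_API_KEY", "")

# After this, code along the lines of:
#   from guidance import models
#   lm = models.OpenAI("llama3-8b-8192")
# would talk to Groq instead of OpenAI, *if* guidance uses the default
# openai client configuration.
```

If guidance does not honor the defaults, the fallback is to construct the underlying OpenAI client yourself with `base_url` set explicitly and pass it in where guidance allows.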