-
When running incognito, why do I get groq.RateLimitError?
groq.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for model `llama3-70b-8192` in organization `...` on token…
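A 429 from Groq means the per-minute request or token quota for the key's organization is exhausted, so the usual remedy is to back off and retry. A minimal sketch of that pattern is below; the `RateLimitError` class is stubbed locally so the snippet runs without the `groq` SDK installed, and the `flaky` function is a hypothetical stand-in for the real API call:

```python
import random
import time


class RateLimitError(Exception):
    """Local stand-in for groq.RateLimitError so the sketch is self-contained."""


def with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry `call` on RateLimitError with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            # Sleep base_delay, 2*base_delay, 4*base_delay, ... with jitter.
            time.sleep(base_delay * 2 ** attempt * (1 + random.random()))


# Hypothetical usage: a call that fails twice, then succeeds.
attempts = {"n": 0}


def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError("429: rate limit reached")
    return "ok"


print(with_backoff(flaky, base_delay=0.01))  # prints "ok" after two retries
```

With the real client you would wrap the `chat.completions.create(...)` call instead of `flaky`.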
-
I don't think it makes sense to support tons of other models, but since Groq is 10x faster and offers free API keys for Mixtral at 30 requests a minute, I think it's worth it :)
https://groq.com/
-
-
Please add support for two new LLM providers:
- Perplexity: https://docs.perplexity.ai/reference/post_chat_completions
- Groq: https://console.groq.com/docs/quickstart
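Both providers expose OpenAI-style chat-completions endpoints (base URLs per the linked docs), so support can largely reduce to a base-URL and API-key switch. A sketch that builds the request without sending it; the Perplexity model name is an illustrative assumption, not taken from this thread:

```python
# Both providers speak the OpenAI chat-completions wire format, so one
# request builder can serve both. Base URLs follow the providers' docs.
PROVIDERS = {
    "groq": {
        "base_url": "https://api.groq.com/openai/v1",
        "model": "llama3-70b-8192",
    },
    "perplexity": {
        "base_url": "https://api.perplexity.ai",
        "model": "llama-3-sonar-small-32k-online",  # assumed placeholder
    },
}


def build_chat_request(provider: str, prompt: str) -> tuple[str, dict]:
    """Return (endpoint URL, JSON payload) for an OpenAI-style chat call."""
    cfg = PROVIDERS[provider]
    url = f"{cfg['base_url']}/chat/completions"
    payload = {
        "model": cfg["model"],
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, payload


url, payload = build_chat_request("groq", "Hello")
print(url)  # https://api.groq.com/openai/v1/chat/completions
```

The payload would then be POSTed with an `Authorization: Bearer <key>` header, or passed through an OpenAI-compatible client pointed at the provider's base URL.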
-
This is most likely a bug in Langchain4J...
-
Every search via Groq Cloud is failing with the error below; other searches via OpenAI work.
![image](https://github.com/rashadphz/farfalle/assets/5037273/8d78ea3a-fdd7-44a9-9c20-8af5ba42d9fe)
-
export const keys = {
  groq: '',
  ollama: 'http://localhost:11434/api/chat',
  openai: ''
};
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a…
-
-
`client.embeddings.create(input="Hello", model="llama3-8b-8192", encoding_format="float")`
results in a timeout error:
```
Traceback (most recent call last):
File "/Users/djokester/anaconda3…