-
I tried using vision models such as llama-3.2-90b-vision-preview, llama-3.2-11b-vision-preview, and llava-v1.5-7b-4096-preview, but they all show the same thing:
![image](https://github.com/user-attachments/assets…
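For context, a minimal sketch of how such a vision request is typically made against Groq's OpenAI-compatible chat endpoint (the model id is taken from the report above; the filename and prompt are placeholders, not from the original issue):

```python
import base64

from groq import Groq  # pip install groq

client = Groq()  # reads GROQ_API_KEY from the environment

# Encode a local image as a data URL (Groq also accepts plain HTTP(S) URLs).
with open("screenshot.png", "rb") as f:  # placeholder filename
    data_url = "data:image/png;base64," + base64.b64encode(f.read()).decode()

completion = client.chat.completions.create(
    model="llama-3.2-11b-vision-preview",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }
    ],
)
print(completion.choices[0].message.content)
```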
-
Hi, I access `llama3-70b` through Groq using this [Python tool](https://github.com/simonw/llm). I was hoping to also use Groq with gp.nvim. Any plans to support this?
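Groq exposes an OpenAI-compatible endpoint, so supporting it in a plugin is mostly a matter of allowing a custom base URL. A minimal Python illustration of that compatibility (the model id is one of Groq's published ids; the key is a placeholder):

```python
from openai import OpenAI  # pip install openai

# Groq speaks the OpenAI chat-completions protocol at this base URL,
# so any client or editor plugin that can override the endpoint works.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key="gsk_...",  # your Groq API key
)

resp = client.chat.completions.create(
    model="llama3-70b-8192",
    messages=[{"role": "user", "content": "Hello from an OpenAI-style client"}],
)
print(resp.choices[0].message.content)
```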
-
### Bug Description
I am using DataStax Langflow to create a multi-agent system flow using CrewAI. The model I am using is one of the ChatGroq models. When I use the model alone, the flow works…
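For reproduction purposes, a hedged sketch of wiring a ChatGroq model into a CrewAI agent, assuming a CrewAI version that accepts a LangChain chat model via `llm` (the role, task, and model id below are placeholders, not the reporter's actual flow):

```python
from crewai import Agent, Crew, Task   # pip install crewai
from langchain_groq import ChatGroq    # pip install langchain-groq

# Example model id; reads GROQ_API_KEY from the environment.
llm = ChatGroq(model="llama3-70b-8192", temperature=0)

researcher = Agent(
    role="Researcher",
    goal="Summarize a topic in two sentences",
    backstory="A concise research assistant.",
    llm=llm,
)

task = Task(
    description="Summarize what Groq is.",
    expected_output="A two-sentence summary.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
print(crew.kickoff())
```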
-
For example, in Groq, `llama-3.1-70b-versatile` is called `Llama 3.1 70B` in the dropdown menu, which could easily be confused with the other Llama 70B models provided by Groq. Naming it with the same…
-
### Version
v1.41.1731027960
### Describe the bug
OpenRouter does not work with Cody
I added this config in settings.json:
```jsonc
{
  "provider": "groq", // keep groq as provider
  "mo…
```
-
Right now it uses Gemini as the LLM via the Google API; change it to use an LLM from https://console.groq.com/.
Try out a bunch of models (Llama 3.1, 3.2, etc.) and test which gives the best results, and up…
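A rough sketch of what the swap could look like, assuming the Gemini call is wrapped in a single text-in/text-out helper (`generate` and the model ids here are illustrative, not from the repo):

```python
from groq import Groq  # pip install groq

client = Groq()  # reads GROQ_API_KEY from the environment

def generate(prompt: str, model: str = "llama-3.1-70b-versatile") -> str:
    """Drop-in stand-in for the Gemini call: same prompt in, text out."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Comparing candidate models is then just a loop over model ids:
for m in ["llama-3.1-70b-versatile", "llama3-70b-8192"]:
    print(m, "->", generate("Say hi in five words.", model=m))
```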
-
When using the Groq client (AsyncGroq) in conjunction with the tracing dashboard, tool choices and other function-call-specific details are not displayed as expected. This behavior persists even after…
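A minimal async repro sketch: the `tool_calls` on the response message below carry exactly the function-call details the dashboard should be surfacing (the `get_weather` schema is a placeholder):

```python
import asyncio
import json

from groq import AsyncGroq  # pip install groq

# Hypothetical tool schema for illustration only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

async def main():
    client = AsyncGroq()  # reads GROQ_API_KEY from the environment
    resp = await client.chat.completions.create(
        model="llama3-70b-8192",
        messages=[{"role": "user", "content": "Weather in Paris?"}],
        tools=tools,
        tool_choice="auto",
    )
    # These are the details that should show up in the trace.
    for call in resp.choices[0].message.tool_calls or []:
        print(call.function.name, json.loads(call.function.arguments))

asyncio.run(main())
```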
-
It seems that Groq doesn't like tool calls in its message history?
```r
library(elmer)
#' Get the current time
get_current_time <- function() {
  format(Sys.time(), usetz = TRUE)  # minimal stand-in; the original definition was truncated
}
```
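For comparison, this is the OpenAI-style history shape Groq's chat API expects when a tool call is replayed: an assistant turn carrying `tool_calls`, then one `tool` turn echoing the same `tool_call_id`. Shown as a Python literal; the id and timestamp are placeholders:

```python
# History replayed to Groq's chat endpoint after a tool call.
history = [
    {"role": "user", "content": "What time is it?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",  # placeholder id
            "type": "function",
            "function": {"name": "get_current_time", "arguments": "{}"},
        }],
    },
    # The tool result must reference the same tool_call_id.
    {"role": "tool", "tool_call_id": "call_1", "content": "2024-11-08 12:00:00 UTC"},
    {"role": "user", "content": "Thanks! And in Tokyo?"},
]
```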
-
I want to run it with a Groq LLM; what code do I need to change, and where?
Besides the Python code, I think the prompts would probably also need to be changed (src/goal_prompt.md).
Especially, I would like to do ex…
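One low-touch option, assuming the project can route its LLM calls through litellm (an assumption, since the repo's model wiring isn't shown here), is litellm's `groq/` model prefix:

```python
import litellm  # pip install litellm

# Assumes GROQ_API_KEY is set; the "groq/" prefix routes the call to Groq.
resp = litellm.completion(
    model="groq/llama-3.1-70b-versatile",
    messages=[{"role": "user", "content": "Ping"}],
)
print(resp.choices[0].message.content)
```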
-
I was trying to use Agent Zero with rate limiting options:
rate_limit_input_tokens = 6000,
rate_limit_output_tokens = 3000,
because I was getting the following error from the Groq API:
groq.APIStatu…
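A common workaround while rate limits remain an issue is exponential backoff on 429 responses; a sketch using the Groq SDK's `RateLimitError` (the helper name and retry schedule are arbitrary):

```python
import time

from groq import Groq, RateLimitError  # pip install groq

client = Groq()  # reads GROQ_API_KEY from the environment

def chat_with_backoff(messages, model="llama3-70b-8192", retries=5):
    """Retry with exponential backoff when Groq returns HTTP 429."""
    for attempt in range(retries):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except RateLimitError:
            time.sleep(2 ** attempt)  # sleeps 1s, 2s, 4s, ...
    raise RuntimeError("still rate limited after retries")
```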