-
### Is your feature request related to a problem? Please describe.
AutoGen is too slow using the OpenAI API and needs the speed of the Groq LLM API.
### Describe the solution you'd like
I would like to be a…
-
Awesome work! Can you make the node support Groq? They use an OpenAI-compatible API and serve Llama 3, Mixtral, and Gemma models, and it's free for the time being.
https://groq.com/
Thank you!
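Because Groq's API is OpenAI-compatible, supporting it largely comes down to pointing an OpenAI-style chat-completion request at Groq's endpoint. A minimal stdlib-only sketch of building such a request follows; the base URL and model name come from Groq's public documentation but should be treated as assumptions, and actually sending the request requires a valid `GROQ_API_KEY`:

```python
import json
import os
import urllib.request

# Groq's OpenAI-compatible endpoint (assumption, per Groq's public docs).
GROQ_BASE = "https://api.groq.com/openai/v1"

def build_chat_request(prompt: str, model: str = "llama3-70b-8192") -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request aimed at Groq.

    The payload shape is the standard OpenAI /chat/completions format,
    which is why an OpenAI-compatible node can reuse its existing code path.
    """
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{GROQ_BASE}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('GROQ_API_KEY', '')}",
            "Content-Type": "application/json",
        },
    )
```

Dispatching the request (e.g. with `urllib.request.urlopen`) would then return the familiar OpenAI-style JSON response, so only the base URL and key handling differ from an OpenAI integration.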
-
```
import os
import yaml
from loguru import logger
from langchain_core.language_models.chat_models import BaseChatModel
from langchain_community.chat_models import ChatLiteLLMRouter
import li…
```
-
To expand the project's functionality, I propose adding support for models from Groq and OpenRouter. This will allow us to utilize additional data processing capabilities and improve performance in specif…
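One way such multi-provider support is often structured is a small provider registry keyed by a `provider/model` prefix, since both Groq and OpenRouter expose OpenAI-compatible endpoints. The sketch below is illustrative only; the endpoint URLs and environment-variable names are assumptions taken from each service's public docs, and the real integration would depend on the project's own config format:

```python
# Hypothetical provider registry; both services are OpenAI-compatible,
# so only the base URL and API key differ per provider.
PROVIDERS = {
    "groq": {
        "base_url": "https://api.groq.com/openai/v1",   # assumption, per Groq docs
        "api_key_env": "GROQ_API_KEY",
    },
    "openrouter": {
        "base_url": "https://openrouter.ai/api/v1",     # assumption, per OpenRouter docs
        "api_key_env": "OPENROUTER_API_KEY",
    },
}

def resolve_provider(model_id: str):
    """Split a 'provider/model' id and return (endpoint settings, model name)."""
    provider, sep, name = model_id.partition("/")
    if not sep or provider not in PROVIDERS:
        raise ValueError(f"unknown provider in model id: {model_id!r}")
    return PROVIDERS[provider], name
```

With this shape, adding a new OpenAI-compatible provider is a one-entry change to the registry rather than a new code path.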
-
Even if the translation quality is only average, it's fast, and isn't speed the whole point of using a command-line tool? If I wanted quality I'd obviously use GPT.
In the time it takes that wretched fanyi tool to query groq, why wouldn't I just open GPT instead?
Thanks to the author for providing such a great tool, from a longtime fanyi user who got fed up with fanyi.
-
[Groq](https://groq.com/) has tremendous inference speeds (280 tokens per second for Llama 3 70B and 877 tokens per second for Llama 3 8B). It would be amazing to get support for this in Jupyter AI.
-
- Only GROQ_API_KEY can be used. Could support for other APIs be added?
-
I'm trying to use other models besides OpenAI in your example code, but I'm just getting a response back saying "invalid model". I've only tried with Anthropic and Perplexity, so I haven't set any of the other m…
-
**Describe the bug**
Right now our Groq instrumentor isn't properly capturing input values from Groq chat completion calls. See the screenshot below.
**To Reproduce**
Trace any call to groq's chat…
-
**Describe the bug**
Tried to configure Llama 3.1 models on this extension using Groq and an API key, following the Groq documentation.
**To Reproduce**
Steps to reproduce the be…