-
Ollama support (or even better, LiteLLM) would be fantastic
_Originally posted by @lucacri in https://github.com/saoudrizwan/claude-dev/issues/76#issuecomment-2285353028_
…
-
I am having trouble getting the response as a stream. While the "normal" request works fine, I get an error response when I set 'stream' => true.
This is how I try to use the streaming response. …
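The poster's snippet was truncated, so as a hedged sketch (the SSE framing and `delta` field are assumptions, not the poster's actual code): many chat APIs deliver a streamed response as server-sent-event `data:` lines, which can be parsed chunk by chunk like this:

```python
def parse_sse_chunks(lines):
    """Yield the payload of each `data:` line from a server-sent-event stream."""
    for line in lines:
        if line.startswith("data:"):
            payload = line[len("data:"):].strip()
            if payload != "[DONE]":  # many APIs end the stream with this sentinel
                yield payload

# Example frames as a streaming chat API might send them (illustrative only):
sample = ['data: {"delta": "Hel"}', '', 'data: {"delta": "lo"}', 'data: [DONE]']
print(list(parse_sse_chunks(sample)))
```

In a real client the `lines` iterable would come from the HTTP library's line iterator over a response requested with streaming enabled.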
-
It always reports a connection error:
`Error: 400 invalid preamble: Request with a GET or HEAD method cannot have a body.`
The network is working fine, and the API address is reachable.
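That error means the client is attaching a request body to a GET (or HEAD) request, which the server rejects. As a sketch (the endpoint URL is hypothetical), Python's standard library illustrates the fix: once a body is attached, the method must be POST, and `urllib` switches it automatically:

```python
import json
import urllib.request

payload = json.dumps({"prompt": "hello", "stream": False}).encode()

# Attaching a body (`data=`) makes urllib send POST rather than GET;
# a GET carrying a body is exactly what triggers the 400 "invalid preamble".
req = urllib.request.Request(
    "https://api.example.com/v1/messages",  # hypothetical endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
)
print(req.get_method())  # "POST"
```

If a client library reports this error, check that the request with a JSON body is not being issued as a GET.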
-
Thank you for this incredible extension.
I sometimes have the problem that an API request hangs (possibly due to a rate limit or an unreliable internet connection), and there seems to be no way to cancel it manu…
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
WARNING 10-27 12:26:23 cuda.py:22] You are using a deprecated `pynvml` package.…
-
The spanner router wastes most of its runtime creating debug strings that are never used in production.
## Explanation
![Screenshot from 2024-06-11 19-38-14](https://github.com/codekeyz/pharaoh…
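The usual fix for this pattern (illustrated here in Python rather than the project's Dart, with a hypothetical `Expensive` stand-in for the costly route description) is to pass the costly value as a lazy logging argument, so the string is only built if a handler will actually emit the record:

```python
import logging

calls = []

class Expensive:
    """Stand-in for an object whose string form is costly to build."""
    def __repr__(self):
        calls.append(1)
        return "route-tree"

logger = logging.getLogger("router")
logger.setLevel(logging.INFO)  # DEBUG disabled, as in production

# The %s argument is only formatted when the DEBUG record is actually
# emitted, so repr() never runs here; an f-string would have built it anyway.
logger.debug("matched %s", Expensive())
print(len(calls))  # repr was never built
```

The same idea applies in other languages: guard or defer debug-string construction behind the log level instead of building the string eagerly.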
-
### Bug Description
I'm very new to langflow and was interested in leveraging vllm with langflow via the openapi
I tried a few llava models without success
langflow does not recognize the na…
-
As a user, I would like to be able to use the OpenRouter API for making LLM requests.
As a user, when selecting the API endpoint from the drop-down box, I should be able to select 'OpenRouter' as a…
-
I started getting this error more frequently after I updated to v1.0.85
429 {"type":"error","error":{"type":"rate_limit_error","message":"Number of request tokens has exceeded your per-minute rate li…
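A common client-side mitigation for 429 responses (a sketch, not the extension's actual retry logic; `RateLimitError` is a hypothetical stand-in for whatever exception the API client raises on a 429) is exponential backoff with jitter:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 rate-limit response."""

def with_backoff(fn, max_attempts=5, base=0.5):
    """Call fn, retrying on RateLimitError with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Wait base * 2^attempt plus a small random offset so that
            # concurrent clients do not all retry at the same instant.
            time.sleep(base * 2 ** attempt + random.uniform(0, 0.1))
```

Respecting a `Retry-After` header, when the server sends one, is usually preferable to a fixed backoff schedule.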
-
### Have you searched for similar requests?
Yes
### Is your feature request related to a problem? If so, please describe.
_No response_
### Describe the solution you'd like
Currently, when OpenRo…