-
Due to lack of maintenance effort, the project is currently lagging behind. The current state of the dependencies does not provide a working configuration. Since I'm focusing on other projects for the…
-
### Discord username (optional)
_No response_
### Describe the solution you'd like?
```text
To have the option in the new Warp AI to add our own API key, and maybe options to tweak the settings like…
-
Hi,
Thanks for this wonderful project.
You recently implemented a custom endpoint; if you can also implement a custom name, we could make all existing OpenAI-compatible providers work, like Perplexity, mi…
-
There is a new API for running AI tasks. It is slightly different from the old one.
As Mail uses Summary, Topics, and FreePrompt, it should be relatively straightforward to migrate to the taskPro…
-
When I try to use LM Studio through the reverse proxy, the console just prints:
[2023-12-20 14:44:52.686] [INFO] Received GET request to /v1/models with body: {}
[2023-12-20 14:44:52.687] [INFO] Retu…
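To see what the proxy actually returns for that request, the `/v1/models` response can be fetched and parsed. This is a minimal sketch assuming the standard OpenAI-style response shape `{"data": [{"id": ...}]}`; the sample body and the localhost URL are illustrative, not taken from the logs above:

```python
import json
import urllib.request

def parse_model_ids(body: str) -> list[str]:
    """Extract model IDs from an OpenAI-style /v1/models response."""
    return [m["id"] for m in json.loads(body).get("data", [])]

# Example of the response shape an OpenAI-compatible server returns
# from GET /v1/models (contents are illustrative):
sample = '{"data": [{"id": "local-model", "object": "model"}], "object": "list"}'
print(parse_model_ids(sample))  # ['local-model']

# Against a live server (uncomment to run):
# with urllib.request.urlopen("http://localhost:1234/v1/models") as resp:
#     print(parse_model_ids(resp.read().decode()))
```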
-
Currently the plugin configuration directly references ollama/lmstudio; having the option to use an OpenAI-compatible API endpoint would be more generic and would hint to the user that any backend can be used.
…
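A generic OpenAI-compatible backend usually needs only a base URL, an API key, and a model name, so a provider-agnostic configuration could look like the following sketch. The field names, defaults, and `chat_url` helper are illustrative assumptions, not the plugin's actual schema:

```python
# Hypothetical provider-agnostic config: any OpenAI-compatible backend
# (Ollama, LM Studio, a cloud provider) is described by the same three fields.
providers = {
    "lmstudio": {
        "base_url": "http://localhost:1234/v1",   # LM Studio's default local port
        "api_key": "not-needed",                  # local servers often ignore the key
        "model": "local-model",
    },
    "ollama": {
        "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
        "api_key": "ollama",
        "model": "llama3",
    },
}

def chat_url(provider: str) -> str:
    """Build the chat-completions URL for any configured backend."""
    return providers[provider]["base_url"].rstrip("/") + "/chat/completions"

print(chat_url("lmstudio"))  # http://localhost:1234/v1/chat/completions
```

Because every backend shares the same shape, adding a new provider is just another dictionary entry rather than a code change.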
-
### What happened?
I am trying to run Ollama, but I couldn't find a specific command to run a private Ollama model.
Is there any further guidance available in the current repo?
### Relevant log output
``…
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch…
-
By leveraging technologies like GPUs, Heroku can provide customers with faster and more efficient AI computing capabilities, all with the Heroku DX, to provide the “Heroku magic” customer experience whi…
-
I'm looking for a way to send an HTTP request as a client to the server via POST. My server only uses the POST method, not GET.
Example:
curl https://....mynetworkprovider..../api/v3/events -X POST -H "…
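A POST request like the curl call above can be sent from Python with the standard library. The URL, payload, and header values here are placeholders, not the actual endpoint:

```python
import json
import urllib.request

# Placeholder endpoint; substitute the real API URL.
url = "https://example.com/api/v3/events"

payload = json.dumps({"event": "test"}).encode("utf-8")

# Build a POST request explicitly: urllib defaults to GET when no data is
# given, so passing `data` and method="POST" ensures the POST-only API is hit.
req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(req.get_method())  # POST

# To actually send it (requires a reachable server):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```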