-
### Check for existing issues
- [X] Completed
### Describe the feature
I am successfully using my local Ollama models in the assistant panel.
I would love to be able to use them as well as an `in…
-
I've long had a [`setup`](https://github.com/kbd/setup/blob/main/HOME/bin/setup) program that makes symlinks to dotfiles, installs packages, etc. At some point I realized I could replace my code with …
-
OpenAI's completion API has a `suffix` parameter that allows one to supply the context after the completion to enable fill-in-the-middle completions:
```python
prompt = "def say_hello("
suffix = ""…
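# --- A complete, runnable sketch of a fill-in-the-middle request payload,
# continuing the idea above. The model name and token limit are illustrative
# assumptions, not taken from the issue.
prompt = "def say_hello(name):\n"        # text before the gap
suffix = "\n    return greeting\n"       # text after the gap
payload = {
    "model": "gpt-3.5-turbo-instruct",   # assumption: any completions-capable model
    "prompt": prompt,
    "suffix": suffix,
    "max_tokens": 64,
}
# This payload would be POSTed to the completions endpoint; the returned
# completion is meant to fit between `prompt` and `suffix`.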
-
Is there a way to enforce structured outputs?
Like in DSPY or instructor?
Like using "response_model" as a pydantic model in the openai chat completions endpoint?
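One common pattern, sketched below with only the standard library (so it is not tied to DSPy, instructor, or pydantic specifically): supply a JSON Schema in the request's `response_format` and validate the reply locally before use. The model name, schema, and `validate` helper here are illustrative assumptions, not part of any library's API.

```python
import json

# Hypothetical target shape for the structured output.
person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# Request body using OpenAI's structured-outputs option (assumption: the
# deployed model supports `response_format` of type `json_schema`).
request_body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Describe a person as JSON."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "person", "schema": person_schema, "strict": True},
    },
}

def validate(reply_text: str) -> dict:
    """Minimal local check that the reply parses and has the required fields."""
    data = json.loads(reply_text)
    for key in person_schema["required"]:
        if key not in data:
            raise ValueError(f"missing field: {key}")
    return data

# Example with a canned reply (no network call made here):
parsed = validate('{"name": "Ada", "age": 36}')
```

Libraries like instructor wrap this same loop (schema in the request, validation of the reply) behind a `response_model` argument, adding retries when validation fails.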
-
### Problem description
When offering a list of exclusive completions via `.sublime-completions` files, there is currently no way to prevent buffer completions from being offered. The only way to d…
-
Here are suggestions for improving the `cs` help message. The message begins as follows:
```shell
$ cs -h
Usage: /home/mslinn/.local/share/coursier/bin/.cs.aux
...
```
1. Users would not no…
-
The objective of this issue is to update the `chat/completions` endpoint to use the Xef Library depending on the JSON that comes in the request.
-
![image](https://github.com/user-attachments/assets/6f814436-b443-4714-b99b-bbe2d9fcbdc5)
-
### Your current environment
Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubu…
-
> I found that execution is capped at 200 nodes. There must be a limit somewhere.
After a long search, I found a maxRunTimes parameter whose default is 200.
The conversation flow appears to be controlled by maxRunTimes in projects/app/src/pages/api/v1/chat/completions.ts.
_Originally posted by @blvyoucan in https://gi…