-
Steps to reproduce:
1. Run a service with model mapping using `openai` format
Example:
```yaml
type: service
name: llama31-service-tgi
replicas: 1..2
scaling:
  metric: rps
  target: …
```
-
### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
This is my settings file (~/.config/zed):
```json
{
  "inline_completions": {
    "disabled_gl…
```
-
**Describe the bug**
I'm using the OpenAIGenerator to access a vLLM endpoint on RunPod. When using a base model like Mistral v0.3 that has not been instruction-tuned and so does not have a chat templ…
-
Same as:
> Can it provide completions for common shells? I guess it could be helpful. TIA!
-
The GitHub Models base URL does not include `v1`; the format is: https://models.inference.ai.azure.com/chat/completions
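A minimal sketch (hypothetical variable names) of how the chat-completions endpoint is derived from this base URL, note that no `/v1` segment is inserted:

```python
# GitHub Models uses the Azure inference host directly, without the
# "/v1" path prefix that the standard OpenAI API expects.
base_url = "https://models.inference.ai.azure.com"
chat_endpoint = f"{base_url}/chat/completions"
print(chat_endpoint)  # https://models.inference.ai.azure.com/chat/completions
```

A client that hard-codes a `/v1` prefix onto its configured base URL would therefore produce the wrong path for this service.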
-
Are completions supposed to appear in the L assist window or in the error window? This picture shows them just being appended to the same error window.
![image](https://github.com/user-attachments/ass…
-
```lua
["*"] = { escape = false, close = false, pair = "**", enabled_filetypes = {"*"}},
["/"] = { escape = false, close = false, pair = "//", enabled_filetypes = {"*"}},
```
This…
-
### Are you following the right branch?
- [X] My Nixpkgs and Home Manager versions are in sync
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Issue descript…
-
![image](https://github.com/user-attachments/assets/90fc33db-e249-4ddd-9673-a9fb4ac7443d)
That variable name is completely unrelated. I don't think completions would ever be useful here except when…
-
### Is your feature request related to a problem? Please describe.
The dotnet CLI today has pervasive completion support for commands, options, and arguments. However, our telemetry data suggests t…