yaroslavyaroslav / OpenAI-sublime-text

First class Sublime Text AI assistant with GPT-o1 and ollama support!
MIT License

Failure to connect to server with auth disabled still results in error "No API token provided" #64

Open knutle opened 1 week ago

knutle commented 1 week ago

Problem: Currently the package returns an API request error even when the server does not require authentication.

Cause: Excessive pre-request validation on token property from package settings.

Desired behavior: Provide a way for the user to bypass this check when necessary, while still triggering the validation in legitimate cases.

Suggested fix: Allow the token property to be empty or omitted from the package settings when the provider does not require it, either by explicitly setting "token": null or by adding a separate property such as "auth": false.
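In the package settings the two proposed shapes might look like this (these property values are the suggestion only; neither is implemented yet):

```json
{
    "url": "http://localhost:1234",

    // Option 1: explicitly opt out of auth with a null token
    "token": null,

    // Option 2: a separate flag that disables the token check
    // "auth": false,
}
```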


The error message is triggered by the exception raised on line 472 in the following code snippet.

https://github.com/yaroslavyaroslav/OpenAI-sublime-text/blob/be04e41555be38cadfab9f047f90a0d2aa7debe7/plugins/openai_worker.py#L468-L474


Simply ensuring that the token key exists in the plugin config with any value longer than 10 characters seems to be a good workaround for now, though this should be fairly straightforward to fix properly.
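A relaxed version of the validation could look roughly like this. This is a sketch only: the function name, settings access, and length threshold here are illustrative, not the plugin's actual code. The idea is to treat a null/empty token as "no auth required" and only validate when a token is actually supplied.

```python
from typing import Optional


def resolve_token(settings: dict) -> Optional[str]:
    """Return the token to send, or None when the server needs no auth.

    Raises ValueError only when a token was supplied but looks malformed.
    """
    token = settings.get("token")
    # An explicitly null or empty token means "server requires no auth":
    # skip validation entirely and send no credentials.
    if token is None or token == "":
        return None
    # The 10-character threshold is illustrative, mirroring the
    # workaround described above.
    if len(token) < 10:
        raise ValueError(
            "No API token provided, you have to set the OpenAI token "
            "into the settings to make things work."
        )
    return token
```

With this shape, the existing error is still raised for a short non-empty token, but an unsecured local server no longer forces the user to invent a dummy value.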

knutle commented 1 week ago

Summary + Workaround for End-Users

[!CAUTION] There is currently a known issue which will trigger the following error even when connecting to an unsecured server.

"No API token provided, you have to set the OpenAI token into the settings to make things work."

Enabling authentication is highly recommended in most cases, but when self-hosting models on your local system it can be inconvenient.

[!TIP] Use the following workaround to avoid this error until a permanent solution can be released.

Simply ensure your assistant configuration defines a "token" value longer than 10 characters. It can be anything, since the server ignores it, but it must be present to pass the validation check.

Sample config

{
    "url": "http://localhost:1234", // Url to your unsecured server
    "token": "xxxxxxxxxx", // Token can be anything so long as it is at least 10 characters long
    "assistants": [
        {
            // Inherits token from top-level, no error
            "name": "Code assistant",
            "prompt_mode": "panel",
            "chat_model": "codestral-22b-v0.1",
            "assistant_role": "You are a software developer, you develop software programs and applications using programming languages and development tools.",
            "temperature": 1,
            "max_tokens": 2048,
        },
        {
            // Overrides top-level token incorrectly, will get error
            "name": "Lazy Assistant",
            "token": "",
            "prompt_mode": "phantom",
            "chat_model": "llama-3-8b-instruct-32k-v0.1",
            "assistant_role": "You are very unhelpful.",
            "max_tokens": 4000,
        },
        {
            // Overrides top-level token correctly, no error
            "name": "General Assistant",
            "token": "abcdefghijklmn",
            "prompt_mode": "phantom",
            "chat_model": "llama-3-8b-instruct-32k-v0.1",
            "assistant_role": "You are very helpful.",
            "max_tokens": 4000,
        },
    ]
}
yaroslavyaroslav commented 3 days ago

As long as the docs are updated, this is actually not a bug but a feature to implement. Originally, for some forgotten reason, I made this check local, but I think it's really worth leaving it to the remote server to decide whether a token is required and, if so, what format and length it should be. So I believe those checks can be safely deleted.
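Deleting the local check would amount to something like the following sketch (function name and header construction are illustrative, not the plugin's actual request code): send credentials only when a token is configured, and let the server decide whether that is acceptable.

```python
def build_headers(token):
    """Build request headers, attaching auth only when a token exists."""
    headers = {"Content-Type": "application/json"}
    # An unsecured server simply never sees an Authorization header;
    # a secured one will reject the request with 401 on its own,
    # so no client-side token validation is needed.
    if token:
        headers["Authorization"] = f"Bearer {token}"
    return headers
```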