ONLYOFFICE / onlyoffice.github.io

ONLYOFFICE plugins. Code, resources, and styling for the Plugin Marketplace and Plugins Manager.
Apache License 2.0

Allow using ChatGPT Plugin with LocalAI #269

Open socialize-IT opened 1 year ago

socialize-IT commented 1 year ago

As LocalAI enables the use of AI for privacy-sensitive use cases, it would be great to have these abilities in OnlyOffice. LocalAI can be used as a drop-in replacement for the OpenAI API. To support it, the API URL would need to be configurable, and the API key requirement would need to be dropped whenever the OpenAI API is not being used.
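For illustration, here is a minimal sketch of what "drop-in replacement" means in practice, assuming a LocalAI instance on its default port 8080 and an illustrative model name (this is not the plugin's actual code):

// Same request shape the plugin already sends to OpenAI, pointed at a
// configurable base URL and sent without an Authorization header.
const baseUrl = 'http://localhost:8080/v1'; // LocalAI default; would come from settings
fetch(baseUrl + '/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' }, // no 'Authorization: Bearer <key>'
    body: JSON.stringify({
        model: 'gpt-3.5-turbo', // LocalAI maps this name to a locally installed model
        messages: [ { role: 'user', content: 'Summarize this text: "..."' } ]
    })
})
    .then(function(response) { return response.json(); })
    .then(function(data) { console.log(data.choices[0].message.content); });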

A nice add-on would be the ability to set this for all users on the OnlyOffice server and to allow or disallow individual features (e.g. allow summarization but disallow image generation).
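For what it's worth, a hedged sketch of what such a server-wide setting could look like (the shape and key names are entirely hypothetical, not an existing ONLYOFFICE format):

// Hypothetical per-server plugin configuration, applied to all users:
const serverPluginConfig = {
    apiUrl: 'http://localhost:8080/v1', // local endpoint; empty would mean api.openai.com
    requireApiKey: false,               // skip the key requirement for local endpoints
    features: {
        summarize: true,                // allow summarization
        imageGeneration: false          // disallow image generation
    }
};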

sati-bodhi commented 6 months ago

We could simply give the option of leaving the OpenAI API key blank and letting users enter their local LLM endpoint in place of the API key, just as the NextCloud OpenAI integration does here.
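One way to read that proposal in code; a minimal sketch (the helper and its name are mine, not from the plugin):

// Hypothetical: if the field's value looks like a URL, treat it as a local
// endpoint; otherwise treat it as an OpenAI API key.
function parseApiField(value) {
    value = value.trim();
    if (/^https?:\/\//.test(value)) {
        return { endpoint: value.replace(/\/+$/, ''), apiKey: null };
    }
    return { endpoint: 'https://api.openai.com/v1', apiKey: value };
}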


The change could be as trivial as factoring the hard-coded base URL out of code.js:

        window.Asc.plugin.executeMethod('StartAction', [isNoBlockedAction ? 'Information' : 'Block', 'ChatGPT: ' + loadingPhrase]);

        switch (type) {
            case 1:
                settings.messages = [ { role: 'user', content: `Summarize this text: "${text}"` } ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;

            case 2:
                settings.messages = [ { role: 'user', content: `Get Key words from this text: "${text}"` } ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;

            case 3:
                settings.messages = [ { role: 'user', content: `What does it mean "${text}"?` } ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;

            case 4:
                settings.messages = [ { role: 'user', content: `Give a link to the explanation of the word "${text}"` } ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;

            case 5:
                settings.messages = [ { role: 'user', content: text } ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;

            case 6:
                settings.messages = [ { role: 'user', content: text } ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;

            case 7:
                delete settings.model;
                delete settings.max_tokens;
                settings.prompt = `Generate image:"${text}"`;
                settings.n = 1;
                settings.size = `${imgsize.width}x${imgsize.height}`;
                settings.response_format = 'b64_json';
                url = 'https://api.openai.com/v1/images/generations';
                break;

            case 8:
                settings.messages = [ { role: 'user', content: `What does it mean "${text}"?` } ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;

            case 9:
                settings.messages = [ { role: 'user', content: `Give synonyms for the word "${text}" as javascript array` } ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;

            case 10:
                imageToBlob(text).then(function(obj) {
                    url = 'https://api.openai.com/v1/images/variations';
                    const formdata = new FormData();
                    formdata.append('image', obj.blob);
                    formdata.append('size', obj.size.str);
                    formdata.append('n', 1);// Number.parseInt(elements.inpTopSl.value));
                    formdata.append('response_format', "b64_json");
                    fetchData(formdata, url, type, isNoBlockedAction);
                });
                break;

            case 11:
                settings.messages = [ { role: 'user', content: `Correct the errors in this text: ${text}`} ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;

            case 12:
                settings.messages = [ { role: 'user', content: `Rewrite differently and give result on the same language: ${text}`} ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;

            case 13:
                settings.messages = [ { role: 'user', content: `Make this text longer and give result on the same language: ${text}`} ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;

            case 14:
                settings.messages = [ { role: 'user', content: `Make this text simpler and give result on the same language: ${text}`} ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;

            case 15:
                settings.messages = [ { role: 'user', content: `Make this text shorter and save language: ${text}`} ];
                url = 'https://api.openai.com/v1/chat/completions';
                break;
        }
        if (type !== 10)
            fetchData(settings, url, type, isNoBlockedAction);
    };

That is, keep https://api.openai.com/v1/ for use with an API key, and replace the string with:

  1. http://localhost:1234/v1/ for users deploying through LM Studio, and
  2. http://localhost:8080/v1/ for those running LocalAI.

Integration with Ollama will be a bit more complicated because its API endpoints are somewhat different.
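A minimal sketch of the factored-out version (the helper name and the settings lookup are hypothetical; the real change would read whatever settings.html saves):

// Resolve the base URL once instead of hard-coding it in every case.
const OPENAI_BASE = 'https://api.openai.com/v1';
function getBaseUrl() {
    // Hypothetical lookup of the endpoint saved from settings.html;
    // falls back to OpenAI when no local endpoint is configured.
    return localStorage.getItem('localLlmEndpoint') || OPENAI_BASE;
}

// Every chat case in the switch then becomes:
url = getBaseUrl() + '/chat/completions';
// ...and the image cases:
url = getBaseUrl() + '/images/generations';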

settings.html can be modified to give users the option of entering their own endpoint in place of OpenAI's:

<body>
    <div class="info">
        <span class="i18n">For using ChatGPT with OpenAI, you should get an API key.</span>
        <span class="i18n">Go to</span> <a target="_blank" href="https://beta.openai.com/account/api-keys">OpenAI API keys</a><span>.</span>
        <span class="i18n">Create API keys and copy in this field.</span>
        <span class="i18n">Alternatively, you can use a local language model by entering your endpoint URL below:</span>
    </div>
    <div class="form">
        <input id="inp_key" class="form-control" placeholder="Api key">
        <button class="btn-text-default i18n" id="btn_save">Save</button>
    </div>
    <div class="form">
        <span class="i18n">Select your local LLM provider:</span>
        <div>
            <input type="radio" id="lm_studio" name="llm_provider" value="lm_studio" checked>
            <label for="lm_studio">LM Studio</label>
        </div>
        <div>
            <input type="radio" id="localai" name="llm_provider" value="localai">
            <label for="ollama">OLLAMA</label>
        </div>
        <input id="inp_local_llm" class="form-control" placeholder="http://localhost:1234/v1">
        <button class="btn-text-default i18n" id="btn_save_local_llm">Save Local LLM</button>
    </div>
    <div class="info">
        <span id="err_message" class="err-message"></span>
        <span id="success_message" class="header hidden i18n">Settings saved successfully.</span>
    </div>
    <div id="loader-container" class="asc-loader-container loader hidden"></div>
</body>

Then, in settings.js, add a handler that updates the local LLM endpoint placeholder as the llm_provider radio selection changes:

// Add this inside the window.Asc.plugin.init function, after the existing code
document.getElementsByName('llm_provider').forEach(function(radio) {
    radio.onclick = function() {
        let placeholder;
        if (this.value === 'lm_studio') {
            placeholder = "http://localhost:1234/v1";
        } else if (this.value === 'localai') {
            placeholder = "http://localhost:8080/v1";
        }
        document.getElementById('inp_local_llm').placeholder = placeholder;
    };
});
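As a follow-up, the saved choice could be restored when the settings window initializes (persisting via localStorage is the same hypothetical assumption as in the earlier sketch):

// Hypothetical: restore the provider choice and endpoint on settings init.
const savedProvider = localStorage.getItem('llmProvider') || 'lm_studio';
document.getElementById(savedProvider === 'localai' ? 'localai' : 'lm_studio').checked = true;
const savedEndpoint = localStorage.getItem('localLlmEndpoint');
if (savedEndpoint) {
    document.getElementById('inp_local_llm').value = savedEndpoint;
}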

Do an endpoint check for btn_save_local_llm in settings.js, similar to the one for btn_save:

document.getElementById('btn_save_local_llm').onclick = function() {
    document.getElementById('err_message').innerText = '';
    document.getElementById('success_message').classList.add('hidden');
    let endpoint = document.getElementById('inp_local_llm').value.trim();
    if (endpoint.length) {
        createLoader();
        // check local llm endpoint by fetching models
        fetch(endpoint + '/models', {
            method: 'GET'
        }).then(function(response) {
            if (response.ok) {
                response.json().then(function(data) {
                    // Process the models
                    // ...
                    // If everything is ok, save the endpoint
                    sendPluginMessage({type: 'onAddLocalLLMEndpoint', endpoint: endpoint});
                });
            } else {
                response.json().then(function(data) {
                    let message = data.error && data.error.message ? data.error.message : errMessage;
                    createError(new Error(message));
                });
            }
        })
        .catch(function(error) {
            createError(error);
        })
        .finally(function(){
            destroyLoader();
        });
    } else {
        createError(new Error(errMessage));
    }
};
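One detail worth handling when saving: normalize the endpoint so that both http://localhost:8080/v1 and http://localhost:8080/v1/ work with the path joins above. A small sketch (the helper name is mine):

// Hypothetical helper: strip trailing slashes so joins like
// endpoint + '/models' stay predictable.
function normalizeEndpoint(endpoint) {
    return endpoint.trim().replace(/\/+$/, '');
}
normalizeEndpoint('http://localhost:8080/v1/'); // -> 'http://localhost:8080/v1'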

I'm not sure how the rest of the OpenAI API code fits together, but this is the gist of it.

Carlos-err406 commented 1 week ago

I also think it would be very valuable to be able to set other base URLs for OpenAI-compatible APIs.

yuisheaven commented 6 days ago

Is there any update on this? I'm urgently waiting for a feature like this. OnlyOffice is very useful because it can be self-hosted; not being able to combine it with self-hosted LLMs is a real bummer.