waylaidwanderer / node-chatgpt-api

A client implementation for ChatGPT and Bing AI. Available as a Node.js module, REST API server, and CLI app.
https://www.npmjs.com/package/@waylaidwanderer/chatgpt-api
MIT License

Custom model not being used in UI when modelOptions is changed in Node API setup #215

Open axsddlr opened 1 year ago

axsddlr commented 1 year ago

Title: Custom model not being used in UI when modelOptions is changed in Node API setup


Description:

I changed modelOptions.temperature to modelOptions in the Node API setup (in the perMessageClientOptionsWhitelist.chatgpt array), as I wanted to use a custom model. However, when I set up a custom model via the UI in PandoraAI, it still defaults to the gpt-3.5-turbo model. I understand that gpt-3.5-turbo is the fallback model, but I would like the UI selection to take effect when modelOptions is whitelisted.

Steps to reproduce:

  1. Change modelOptions.temperature to modelOptions in the perMessageClientOptionsWhitelist.chatgpt array of the Node API setup (see the sketch after this list).
  2. Set up a custom model via the UI in PandoraAI.
  3. Observe that the model still defaults to gpt-3.5-turbo.
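
For reference, a minimal sketch of the whitelist change from step 1, taken from the perMessageClientOptionsWhitelist block of the settings file (posted in full further down this thread):

perMessageClientOptionsWhitelist: {
    validClientsToUse: ['bing', 'chatgpt', 'chatgpt-browser'],
    chatgpt: [
        'promptPrefix',
        'userLabel',
        'chatGptLabel',
        // Changed from 'modelOptions.temperature' so that all modelOptions
        // (including `model`) can be overridden per message.
        'modelOptions',
    ],
},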

Expected behavior:

When a custom model is set up in the UI, it should use the provided model configuration as specified in the modelOptions.

Actual behavior:

Despite changing the modelOptions in the Node API setup, the UI still defaults to the 3.5-turbo model.

Additional context:

I would like the UI to reflect the custom model configuration specified in the modelOptions and not use the fallback 3.5-turbo model when I have set up a custom model.


Please let me know if there is a workaround or if this issue is being addressed in a future update. Thank you!

waylaidwanderer commented 1 year ago

Did you click the save button after modifying the model?

axsddlr commented 1 year ago

?? Why wouldn't I?

waylaidwanderer commented 1 year ago

Just wanted to make sure. I tested changing the model, and it works for me.

Could you remove the perMessageClientOptionsWhitelist.chatgpt property and see if that works?

axsddlr commented 1 year ago

Just wanted to make sure. I tested changing the model, and it works for me.

Could you remove the perMessageClientOptionsWhitelist.chatgpt property and see if that works?

Now I might be stupid as hell, but hardcoding the model is the only way it works, i.e.:

modelOptions: {
    // You can override the model name and any other parameters here.
    // The default model is `gpt-3.5-turbo`.
    model: 'gpt-4',
    // Set max_tokens here to override the default max_tokens of 1000 for the completion.
    // max_tokens: 1000,
},

Seems PandoraAI isn't overriding that default in the UI.
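
For context, a rough sketch of the kind of per-message override the UI would be expected to send, assuming the clientOptions mechanism on POST /conversation described in the settings comments below and the default API host/port; the exact payload PandoraAI sends may differ:

// Hypothetical request to the node-chatgpt-api server (default localhost:3000 assumed).
const response = await fetch('http://localhost:3000/conversation', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        message: 'Hello!',
        clientOptions: {
            clientToUse: 'chatgpt',
            // Only honored if `modelOptions` is allowed by perMessageClientOptionsWhitelist.chatgpt.
            modelOptions: {
                model: 'gpt-4',
            },
        },
    }),
});
console.log(await response.json());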

waylaidwanderer commented 1 year ago

Can you show me your node-chatgpt-api's settings.js?

waylaidwanderer commented 1 year ago

Also, what version of node-chatgpt-api are you running, and how are you running the server? (e.g. using Docker, running it directly with npm start, etc.)

axsddlr commented 1 year ago

Also, what version of node-chatgpt-api are you running, and how are you running the server? (e.g. using Docker, running it directly with npm start, etc.)

Running the latest version of node-chatgpt-api in a Docker container via Compose, managed by Portainer. In settings.js I added the Bing cookie, changed debug to true, and changed modelOptions.temperature to modelOptions:

export default {
    // Options for the Keyv cache, see https://www.npmjs.com/package/keyv.
    // This is used for storing conversations, and supports additional drivers (conversations are stored in memory by default).
    // Only necessary when using `ChatGPTClient`, or `BingAIClient` in jailbreak mode.
    cacheOptions: {},
    // If set, `ChatGPTClient` and `BingAIClient` will use `keyv-file` to store conversations to this JSON file instead of in memory.
    // However, `cacheOptions.store` will override this if set
    storageFilePath: process.env.STORAGE_FILE_PATH || './cache.json',
    chatGptClient: {
        // Your OpenAI API key (for `ChatGPTClient`)
        openaiApiKey: process.env.OPENAI_API_KEY || '',
        // (Optional) Support for a reverse proxy for the completions endpoint (private API server).
        // Warning: This will expose your `openaiApiKey` to a third party. Consider the risks before using this.
        // reverseProxyUrl: 'https://chatgpt.hato.ai/completions',
        // (Optional) Parameters as described in https://platform.openai.com/docs/api-reference/completions
        modelOptions: {
            // You can override the model name and any other parameters here.
            // The default model is `gpt-3.5-turbo`.
            model: 'gpt-3.5-turbo',
            // Set max_tokens here to override the default max_tokens of 1000 for the completion.
            // max_tokens: 1000,
        },
        // (Optional) Davinci models have a max context length of 4097 tokens, but you may need to change this for other models.
        // maxContextTokens: 4097,
        // (Optional) You might want to lower this to save money if using a paid model like `text-davinci-003`.
        // Earlier messages will be dropped until the prompt is within the limit.
        // maxPromptTokens: 3097,
        // (Optional) Set custom instructions instead of "You are ChatGPT...".
        // (Optional) Set a custom name for the user
        // userLabel: 'User',
        // (Optional) Set a custom name for ChatGPT ("ChatGPT" by default)
        // chatGptLabel: 'Bob',
        // promptPrefix: 'You are Bob, a cowboy in Western times...',
        // A proxy string like "http://<ip>:<port>"
        proxy: '',
        // (Optional) Set to true to enable `console.debug()` logging
        debug: true,
    },
    // Options for the Bing client
    bingAiClient: {
        // Necessary for some people in different countries, e.g. China (https://cn.bing.com)
        host: '',
        // The "_U" cookie value from bing.com
        userToken: '',
        // If the above doesn't work, provide all your cookies as a string instead
        cookies: '',
        // A proxy string like "http://<ip>:<port>"
        proxy: '',
        // (Optional) Set to true to enable `console.debug()` logging
        debug: false,
    },
    chatGptBrowserClient: {
        // (Optional) Support for a reverse proxy for the conversation endpoint (private API server).
        // Warning: This will expose your access token to a third party. Consider the risks before using this.
        reverseProxyUrl: 'https://bypass.duti.tech/api/conversation',
        // Access token from https://chat.openai.com/api/auth/session
        accessToken: '',
        // Cookies from chat.openai.com (likely not required if using reverse proxy server).
        cookies: '',
        // A proxy string like "http://<ip>:<port>"
        proxy: '',
        // (Optional) Set to true to enable `console.debug()` logging
        debug: false,
    },
    // Options for the API server
    apiOptions: {
        port: process.env.API_PORT || 3000,
        host: process.env.API_HOST || 'localhost',
        // (Optional) Set to true to enable `console.debug()` logging
        debug: false,
        // (Optional) Possible options: "chatgpt", "chatgpt-browser", "bing". (Default: "chatgpt")
        clientToUse: 'chatgpt',
        // (Optional) Generate titles for each conversation for clients that support it (only ChatGPTClient for now).
        // This will be returned as a `title` property in the first response of the conversation.
        generateTitles: false,
        // (Optional) Set this to allow changing the client or client options in POST /conversation.
        // To disable, set to `null`.
        perMessageClientOptionsWhitelist: {
            // The ability to switch clients using `clientOptions.clientToUse` will be disabled if `validClientsToUse` is not set.
            // To allow switching clients per message, you must set `validClientsToUse` to a non-empty array.
            validClientsToUse: ['bing', 'chatgpt', 'chatgpt-browser'], // values from possible `clientToUse` options above
            // The Object key, e.g. "chatgpt", is a value from `validClientsToUse`.
            // If not set, ALL options will be ALLOWED to be changed. For example, `bing` is not defined in `perMessageClientOptionsWhitelist` above,
            // so all options for `bingAiClient` will be allowed to be changed.
            // If set, ONLY the options listed here will be allowed to be changed.
            // In this example, each array element is a string representing a property in `chatGptClient` above.
            chatgpt: [
                'promptPrefix',
                'userLabel',
                'chatGptLabel',
                // Setting `modelOptions.temperature` here will allow changing ONLY the temperature.
                // Other options like `modelOptions.model` will not be allowed to be changed.
                // If you want to allow changing all `modelOptions`, define `modelOptions` here instead of `modelOptions.temperature`.
                'modelOptions',
            ],
        },
    },
    // Options for the CLI app
    cliOptions: {
        // (Optional) Possible options: "chatgpt", "bing".
        // clientToUse: 'bing',
    },
};

waylaidwanderer commented 1 year ago

Can you try setting perMessageClientOptionsWhitelist just like this? Curious if that has any effect.

perMessageClientOptionsWhitelist: {
    validClientsToUse: ['bing', 'chatgpt', 'chatgpt-browser'],
},

axsddlr commented 1 year ago

Can you try setting perMessageClientOptionsWhitelist just like this? Curious if that has any effect.

perMessageClientOptionsWhitelist: {
    validClientsToUse: ['bing', 'chatgpt', 'chatgpt-browser'],
},

Yep, that fixed the issue for me. Changing the model back and forth works and is reflected in the API via PandoraAI.

waylaidwanderer commented 1 year ago

Okay, we can rule that out as an issue with PandoraAI. I'll transfer this issue to node-chatgpt-api.

One last thing, can you completely rebuild the docker image/container (after you change the settings back to how it was before) and see if that helps? Just want to rule out that you're running an older version, as someone else had an issue where they pulled the latest version, but their docker image was still on the older version.

axsddlr commented 1 year ago

Okay, we can rule that out as an issue with PandoraAI. I'll transfer this issue to node-chatgpt-api.

One last thing, can you completely rebuild the docker image/container (after you change the settings back to how it was before) and see if that helps? Just want to rule out that you're running an older version, as someone else had an issue where they pulled the latest version, but their docker image was still on the older version.

Each of these test cases was done from a fully clean deployment (deleting the package image along with the running Docker container).

waylaidwanderer commented 1 year ago

Each of these test cases was done from a fully clean deployment (deleting the package image along with the running Docker container).

Thanks!