danny-avila / LibreChat

Enhanced ChatGPT Clone: Features Anthropic, AWS, OpenAI, Assistants API, Azure, Groq, o1, GPT-4o, Mistral, OpenRouter, Vertex AI, Gemini, Artifacts, AI model switching, message search, langchain, DALL-E-3, ChatGPT Plugins, OpenAI Functions, Secure Multi-User System, Presets, completely open-source for self-hosting. Actively in public development.
https://librechat.ai/
MIT License

Enhancement: System-wide Custom model Options via librechat.yaml #1617

Closed · K-J-VV closed this issue 4 months ago

K-J-VV commented 7 months ago

What features would you like to see added?

It would be nice if we could also customize the chatGptLabel and promptPrefix. If an admin wants all users to have the same experience with the LLM, it would be useful to set a default prompt and a default name for the LLM chatbot.

Here is an example config.json:

{
    "presetId": "Random String",
    "model": "gpt-3.5-turbo",
    "chatGptLabel": "Wall-E",
    "promptPrefix": "You are Wall-E, from the Disney movie, Wall-E.",
    "temperature": 1,
    "top_p": 1,
    "presence_penalty": 0,
    "frequency_penalty": 0,
    "resendImages": false,
    "imageDetail": "auto",
    "endpoint": "LocalAI",
    "endpointType": "custom",
    "title": "New Chat"
}

More details

The guide at https://docs.librechat.ai/install/configuration/custom_config.html walks through how to set the following default parameters:

Breakdown of Default Params
model: The selected model from the list of models.
temperature: Defaults to 1 if not provided via preset.
top_p: Defaults to 1 if not provided via preset.
presence_penalty: Defaults to 0 if not provided via preset.
frequency_penalty: Defaults to 0 if not provided via preset.
stop: Sequences where the AI will stop generating further tokens. By default, uses the start token ( ||> ), the user label ( \nUser: ), and the end token ( <|diff_marker|> ). Up to 4 sequences can be provided to the [OpenAI API](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stop).
user: A unique identifier representing your end user, which can help OpenAI [monitor and detect abuse](https://platform.openai.com/docs/api-reference/chat/create#chat-create-user).
stream: If set, partial message deltas will be sent, as in ChatGPT. Otherwise, the generation is only available once completed.
messages: [OpenAI format for messages](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages); the name field is added to messages with system and assistant roles when a custom name is specified via preset.
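Taken together, these defaults correspond roughly to a chat completion request body like the following (a sketch; the user value and message contents are placeholders, and the exact payload LibreChat sends may differ):

{
    "model": "gpt-3.5-turbo",
    "temperature": 1,
    "top_p": 1,
    "presence_penalty": 0,
    "frequency_penalty": 0,
    "stop": ["||>", "\nUser:", "<|diff_marker|>"],
    "user": "some-unique-user-id",
    "stream": true,
    "messages": [
        { "role": "system", "content": "..." },
        { "role": "user", "content": "..." }
    ]
}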

But chatGptLabel and promptPrefix are not configurable options.
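For illustration, the kind of system-wide defaults being requested might look something like this in librechat.yaml (the chatGptLabel and promptPrefix keys here are hypothetical; no such option exists at the time of this request):

endpoints:
  custom:
    - name: "LocalAI"
      # ... existing custom endpoint settings ...
      chatGptLabel: "Wall-E"  # hypothetical: default chatbot name for all users
      promptPrefix: "You are Wall-E, from the Disney movie, Wall-E."  # hypothetical: default system prompt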

Which components are impacted by your request?

No response

Pictures

No response


danny-avila commented 7 months ago

Thanks for your suggestion. This is planned as discussed here: https://github.com/danny-avila/LibreChat/discussions/1291

ArmykOliva commented 6 months ago

Is it already possible to add a predefined custom system message for a model?

danny-avila commented 5 months ago

@K-J-VV

But chatGptLabel and promptPrefix are not configurable options.

this issue will take care of the concern you highlighted here as well as this: https://github.com/danny-avila/LibreChat/discussions/1291

Starting work on this within the week.

danny-avila commented 4 months ago

I've made significant progress and should be done tomorrow, but wanted to share how it's shaping up.

To give a name to this feature, I landed on "model specs", which make up a dropdown of admin-configured presets for a simpler interface experience.

(screenshot: model specs dropdown)

They will be defined from the librechat.yaml file, here is a snippet of how my preview image is configured:

modelSpecs:
  list:
    - name: "commander_01"
      label: "Commander in Chief"
      # default: true
      description: "An AI roleplaying as the 50th President of the United States."
      iconURL: "https://i.kym-cdn.com/entries/icons/facebook/000/017/252/2f0.jpg"
      preset:
        chatGptLabel: "Mr. President"
        endpoint: "Ollama"
        greeting: "Greetings, my fellow American."
        frequency_penalty: 0
        model: "llama3:latest"
        presence_penalty: 0
        promptPrefix: >
          As the 50th President of the United States, you are tasked with making decisions that will shape the future of the nation. Your decisions will impact the economy, foreign policy, and the lives of millions of Americans. Your goal is to lead the country with wisdom and integrity. Make decisions that will benefit the nation and its people. Remember, the fate of the nation is in your hands.
        resendFiles: false
        temperature: 0.8
        top_p: 0.5
    - name: "vision_pro"
      label: "Vision Pro"
      description: "An AI specialized in generating detailed descriptions of images."
      iconURL: "openAI"
      preset:
        chatGptLabel: "Vision Helper"
        greeting: "What's up!!"
        endpoint: "openAI"
        model: "gpt-4-turbo"
        promptPrefix: >
          Examine images closely to understand their style, colors, composition, and other elements. Then, craft a detailed prompt that closely resembles the original. Your focus is on accuracy in replicating the style, colors, techniques, and details of the original image in written form. Your prompt must be excruciatingly detailed, as it will be given to an image-generating AI for image generation.
        temperature: 0.8
        top_p: 1

There is a lot of customization possible, and you will also be able to control interface elements as needed.

By default, defining model specs will hide the default dropdowns/settings as pictured above, but they can all be enabled as desired:

interface:
    endpointsMenu: true
    modelSelect: true
    parameters: true
    presets: true
    sidePanel: false
modelSpecs:
# ...

(screenshot: interface with default menus re-enabled)

You can also enforce that messages can only be sent when the settings exactly match one of the specs you defined.

(screenshot: enforced model spec settings)
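A sketch of how that enforcement might be toggled in librechat.yaml (the enforce and prioritize keys appear in the final implementation; their exact semantics here are my reading, not official documentation):

modelSpecs:
  enforce: true     # block sending unless settings exactly match a defined spec
  prioritize: true  # assumed: apply the spec's preset whenever it is selected
  list:
    - name: "commander_01"
      # ... preset as above ...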


It's a stretch goal of mine to define tools as part of presets, both for this feature and in general. If not in this initial implementation, then right after.

NeverOccurs commented 4 months ago

It is overriding the custom endpoints part of librechat.yaml. Is there a way to use both custom endpoints and customized model specs?

danny-avila commented 4 months ago

It is overriding the custom endpoints part of librechat.yaml. Is there a way to use both custom endpoints and customized model specs?

Yes, you would set them like this:

interface:
    endpointsMenu: true
    modelSelect: true
    parameters: true
    presets: true
    sidePanel: true

modelSpecs:
  enforce: false # <--- important for the desired behavior
  prioritize: false # <--- important for the desired behavior
  list:
    - name: "your_spec_name"
# rest omitted

https://www.librechat.ai/docs/configuration/librechat_yaml/object_structure/interface
https://www.librechat.ai/docs/configuration/librechat_yaml/object_structure/model_specs