manateelazycat / mind-wave

Emacs AI plugin based on ChatGPT API
GNU General Public License v3.0
159 stars 26 forks

Enhancement: Model Selection Support for Elisp Function #20

Closed randomwangran closed 1 year ago

randomwangran commented 1 year ago

Is your feature request related to a problem? Please describe. Currently, the elisp function used to call the Python script for GPT model execution is hard-coded to work with a specific model: gpt-3.5-turbo. This limits the flexibility and adaptability of the code for users who may want to work with different models.

Describe the solution you'd like I propose an enhancement to the elisp function to support model selection, allowing users to easily specify which GPT model they want to use. This can be achieved by accepting the model name as an argument and passing it to the Python script.

Proposed changes:

  1. Update the Python script to accept a model_name argument:
# python_script.py
import openai

def send_stream_request(model_name, messages, callback):
    response = openai.ChatCompletion.create(
        model=model_name,  # previously hard-coded as "gpt-3.5-turbo"
        messages=messages,
        temperature=0,
        stream=True
    )
    # ...rest of the code
  2. Modify the elisp function to accept the desired model and pass it as an argument to the Python script:
    (defun call-python-script-with-model (model)
      "Call the Python script with a specific MODEL."
      (interactive "sEnter model name: ")
      (let ((python-command "python")
            (script-path "/path/to/your/python_script.py"))
        (call-process python-command
                      nil
                      "*Python Script Output*"
                      nil
                      script-path
                      model)
        (switch-to-buffer "*Python Script Output*")))
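Tying the two snippets together: `call-process` passes the model name as a positional command-line argument, so the Python side needs to read it from `sys.argv` before calling `send_stream_request`. A minimal sketch of that glue, assuming a fallback to the old default (the entry-point code is hypothetical, not mind-wave's actual implementation):

```python
# Hypothetical entry point for python_script.py: read the model name that
# the elisp function passes as the first positional argument.
import sys

DEFAULT_MODEL = "gpt-3.5-turbo"  # the previously hard-coded model

def parse_model(argv):
    """Return the model name from argv[1], falling back to the old default."""
    return argv[1] if len(argv) > 1 else DEFAULT_MODEL

if __name__ == "__main__":
    model_name = parse_model(sys.argv)
    # send_stream_request(model_name, messages, callback)  # as proposed above
    print(model_name)
```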

Describe alternatives you've considered An alternative solution is to use a configuration file to store the model name. However, this approach requires users to edit the configuration file every time they want to change the model, which is less convenient than simply passing the model name as an argument.
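For comparison, the config-file alternative could look like this minimal sketch (the file location, key name, and helper function are all hypothetical):

```python
# Hypothetical config-file alternative: read the model name from a JSON file
# instead of receiving it as a command-line argument.
import json
import os

CONFIG_PATH = os.path.expanduser("~/.mind-wave-config.json")  # assumed path

def model_from_config(path=CONFIG_PATH):
    """Return the "model" key from the config file, else the old default."""
    try:
        with open(path) as f:
            return json.load(f).get("model", "gpt-3.5-turbo")
    except FileNotFoundError:
        return "gpt-3.5-turbo"
```

Changing models then means editing the file and re-running, which is exactly the inconvenience described above.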

Additional context This enhancement will provide users with greater flexibility in using various GPT models, making it easier to adapt the code for different use cases and improving the overall user experience.

LATEST MODEL | DESCRIPTION | MAX TOKENS | TRAINING DATA
-- | -- | -- | --
gpt-4 | More capable than any GPT-3.5 model, able to do more complex tasks, and optimized for chat. Will be updated with our latest model iteration. | 8,192 tokens | Up to Sep 2021
gpt-4-0314 | Snapshot of gpt-4 from March 14th 2023. Unlike gpt-4, this model will not receive updates, and will only be supported for a three-month period ending on June 14th 2023. | 8,192 tokens | Up to Sep 2021
gpt-4-32k | Same capabilities as the base gpt-4 model but with 4x the context length. Will be updated with our latest model iteration. | 32,768 tokens | Up to Sep 2021
gpt-4-32k-0314 | Snapshot of gpt-4-32k from March 14th 2023. Unlike gpt-4-32k, this model will not receive updates, and will only be supported for a three-month period ending on June 14th 2023. | 32,768 tokens | Up to Sep 2021
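Given the table above, the client side could validate a selected model against its advertised context length. A hypothetical helper (the mapping transcribes the table; the 4,096-token figure for gpt-3.5-turbo comes from OpenAI's documentation of the same period, not from this table):

```python
# Hypothetical lookup of context lengths for the models listed above.
CONTEXT_WINDOW = {
    "gpt-4": 8192,
    "gpt-4-0314": 8192,
    "gpt-4-32k": 32768,
    "gpt-4-32k-0314": 32768,
    "gpt-3.5-turbo": 4096,  # current default; figure from OpenAI docs, not the table
}

def max_tokens(model_name):
    """Return the advertised context length, or raise for an unknown model."""
    try:
        return CONTEXT_WINDOW[model_name]
    except KeyError:
        raise ValueError(f"unknown model: {model_name}")
```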
manateelazycat commented 1 year ago

I'm still on the GPT-4 waitlist, so I haven't had a chance to test the GPT-4 model.

manateelazycat commented 1 year ago

Done, please update to the newest version to choose a model.