Is your feature request related to a problem? Please describe.
Currently, the elisp function used to call the Python script for GPT model execution is hard-coded to work with a specific model: gpt-3.5-turbo. This limits the flexibility and adaptability of the code for users who may want to work with different models.
Describe the solution you'd like
I propose an enhancement to the elisp function to support model selection, allowing users to easily specify which GPT model they want to use. This can be achieved by accepting the model name as an argument and passing it to the Python script.
Proposed changes:
Update the Python script to accept a model_name argument:
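A minimal sketch of the Python-side change, using `argparse`. The parser wiring and the default value are assumptions about the existing script; the actual OpenAI request it makes is elided:

```python
import argparse

def build_parser():
    """Return a parser that accepts an optional GPT model name."""
    parser = argparse.ArgumentParser(description="Run a GPT model request.")
    parser.add_argument(
        "model_name",
        nargs="?",
        default="gpt-3.5-turbo",  # preserve current behavior when no model is given
        help="GPT model to use, e.g. gpt-4 or gpt-4-32k",
    )
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    # ... pass args.model_name into the existing request code, e.g.
    # response = openai.ChatCompletion.create(model=args.model_name, messages=...)
```

Because the positional argument is optional, existing invocations of the script keep working unchanged.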
Modify the elisp function to accept the desired model and pass it as an argument to the Python script:
(defun call-python-script-with-model (model)
  "Call the Python script, passing MODEL as the GPT model name."
  (interactive "sEnter model name: ")
  (let ((python-command "python")
        (script-path "/path/to/your/python_script.py"))
    (call-process python-command
                  nil
                  "*Python Script Output*"
                  nil
                  script-path
                  model)
    (switch-to-buffer "*Python Script Output*")))
Describe alternatives you've considered
An alternative solution is to use a configuration file to store the model name. However, this approach requires users to edit the configuration file every time they want to change the model, which is less convenient than simply passing the model name as an argument.
Additional context
This enhancement will provide users with greater flexibility in using various GPT models, making it easier to adapt the code for different use cases and improving the overall user experience.
LATEST MODEL | DESCRIPTION | MAX TOKENS | TRAINING DATA
-- | -- | -- | --
gpt-4 | More capable than any GPT-3.5 model, able to do more complex tasks, and optimized for chat. Will be updated with our latest model iteration. | 8,192 tokens | Up to Sep 2021
gpt-4-0314 | Snapshot of gpt-4 from March 14th 2023. Unlike gpt-4, this model will not receive updates, and will only be supported for a three month period ending on June 14th 2023. | 8,192 tokens | Up to Sep 2021
gpt-4-32k | Same capabilities as the base gpt-4 model but with 4x the context length. Will be updated with our latest model iteration. | 32,768 tokens | Up to Sep 2021
gpt-4-32k-0314 | Snapshot of gpt-4-32k from March 14th 2023. Unlike gpt-4-32k, this model will not receive updates, and will only be supported for a three month period ending on June 14th 2023. | 32,768 tokens | Up to Sep 2021