simonw / llm

Access large language models from the command-line
https://llm.datasette.io
Apache License 2.0

Reconsider `llm chatgpt` command and general command design #17

Closed: simonw closed this issue 1 year ago

simonw commented 1 year ago

Options for an interactive chat mode:

  • It's a new command - llm chat -m gpt-4 for example. This feels a bit odd since the current default command is actually llm chatgpt ... and llm chat feels confusing.
  • It's part of the default command: llm --chat -4 starts one running.

Maybe the llm chatgpt command is misnamed, especially since it can be used to work with GPT-4.

I named it llm chatgpt because I thought I'd have a separate command for bard and for llama and so on, and because I thought the other OpenAI completion APIs (the non-chat ones, like GPT-3) might end up with a separate command.

Originally posted by @simonw in https://github.com/simonw/llm/issues/6#issuecomment-1590638188

simonw commented 1 year ago

I need a top-level design that supports the following:

simonw commented 1 year ago

Maybe llm chatgpt becomes llm openai.

I need to figure out how the completion APIs like GPT-3 will work within that llm openai command though.
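
One possible shape for that, sketched here as an assumption rather than the actual design, would be separate subcommands under an `llm openai` group for the chat and legacy completion endpoints:

```python
import click


@click.group()
def openai():
    "Commands for the OpenAI APIs (hypothetical grouping)."


@openai.command()
@click.argument("prompt")
@click.option("-m", "--model", default="gpt-3.5-turbo")
def chat(prompt, model):
    "Run PROMPT against a chat-style model (gpt-3.5-turbo, gpt-4, ...)."
    click.echo(f"chat completion with {model}: {prompt}")


@openai.command()
@click.argument("prompt")
@click.option("-m", "--model", default="text-davinci-003")
def complete(prompt, model):
    "Run PROMPT against a legacy completion model (GPT-3 style)."
    click.echo(f"text completion with {model}: {prompt}")


if __name__ == "__main__":
    openai()
```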

sderev commented 1 year ago

What do you think of keeping llm chatgpt while also adding llm openai?

Or a longer version (but maybe better for clarity):

As for the "chat mode", it could be handled with a -i or --interactive flag.

Also, llm would use llm chatgpt by default, but this could be changed in a configuration file.
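
A minimal sketch of how that could look with Click (which llm already uses); the --interactive flag, the config path, and the config format here are all assumptions for illustration:

```python
import json
import pathlib

import click

# Hypothetical config location; llm's real config handling may differ.
CONFIG_PATH = pathlib.Path("~/.llm/config.json").expanduser()


def default_model():
    "Read the user's preferred default model from the config file."
    if CONFIG_PATH.exists():
        return json.loads(CONFIG_PATH.read_text()).get("default_model", "chatgpt")
    return "chatgpt"


@click.command()
@click.argument("prompt", required=False)
@click.option("-i", "--interactive", is_flag=True, help="Start an interactive chat session")
@click.option("-m", "--model", default=default_model, help="Model to use")
def chatgpt(prompt, interactive, model):
    "One-shot prompt by default; a chat loop with -i/--interactive."
    if interactive:
        click.echo(f"entering chat loop with {model} (Ctrl-D to exit)")
    else:
        click.echo(f"prompting {model}: {prompt}")
```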

sderev commented 1 year ago

For ChatGPT, there's also the context length to consider:

simonw commented 1 year ago

I'm going to create a new default command called prompt - which can take the user's preferences into account before passing on to some other command (just openai for the moment).

UPDATE: No, that doesn't work - because I need all of the various options and arguments on the command to be available on that default command too. So I should stick with openai as the default.

simonw commented 1 year ago

Changed my mind again - I think llm prompt is indeed the way to go here.

I'm going to make that the default command, and have it expose a subset of functionality that I expect to be common across all models - it will accept a prompt and a model and run that prompt.

If you need to do something specialized with custom options, you can use llm name-of-model instead. llm prompt will be the lowest common denominator.
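
The "default command" part can be done with the click-default-group package; here's a minimal sketch of that shape (the option set is illustrative, not the actual cli.py):

```python
import click
from click_default_group import DefaultGroup  # pip install click-default-group


@click.group(cls=DefaultGroup, default="prompt", default_if_no_args=True)
def cli():
    "Top-level llm command group."


@cli.command(name="prompt")
@click.argument("prompt")
@click.option("-m", "--model", default="chatgpt", help="Model to run the prompt against")
def prompt_(prompt, model):
    "Lowest common denominator: take a prompt and a model, run the prompt."
    click.echo(f"running against {model}: {prompt}")


if __name__ == "__main__":
    cli()
```

With this, `llm "three names for a pelican"` dispatches to `llm prompt ...`, while model-specific commands can still carry their own custom options.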

simonw commented 1 year ago

I want to make streaming mode the default - I'm fed up with forgetting to add -s to everything. I don't see any harm in it as a default; people can turn it off with --no-stream if they really want to.
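
In Click that could be a paired boolean flag that defaults to on (a sketch; only --no-stream is confirmed by the comment above):

```python
import click


@click.command()
@click.argument("prompt")
@click.option(
    "--stream/--no-stream",
    default=True,
    help="Stream tokens as they arrive (on by default)",
)
def prompt(prompt, stream):
    "Run PROMPT, streaming output unless --no-stream is passed."
    if stream:
        click.echo("streaming tokens as they arrive...")
    else:
        click.echo("waiting for the complete response...")
```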

simonw commented 1 year ago

Here's the set of options for the prompt command now:

https://github.com/simonw/llm/blob/68c3848eb38e9b32c1273c4cfcb5016d6b5e8d93/llm/cli.py#L32-L51

(I just removed the -4 option).

Do these make sense as a set of options for any generic model?

Looking at them in turn:

I'm happy with these as the standard set of options. I don't think it's too harmful that some of them won't make sense for every model; those should return errors if used incorrectly.
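
One way to make unsupported options "return errors if used incorrectly" would be a per-model capability check; everything in this sketch is an assumption, not code from the linked cli.py:

```python
import click

# Hypothetical capability table; real model plugins would declare their own.
MODEL_CAPABILITIES = {
    "chatgpt": {"system_prompt", "stream"},
    "gpt-4": {"system_prompt", "stream"},
    "gpt-3": {"stream"},
}


def validate_options(model, used_options):
    "Raise a Click usage error for options the chosen model does not support."
    unsupported = set(used_options) - MODEL_CAPABILITIES.get(model, set())
    if unsupported:
        raise click.UsageError(
            f"{model} does not support: {', '.join(sorted(unsupported))}"
        )
```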

simonw commented 1 year ago

As part of the templates feature I'll be adding a -t/--template name-of-template option, which will definitely work for all models.
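
As a Click option that might look like this (sketch only; the templates design itself is tracked elsewhere):

```python
import click


@click.command()
@click.argument("prompt", required=False)
@click.option("-t", "--template", help="Name of a stored prompt template")
def prompt(prompt, template):
    "Expand a named template with PROMPT before sending it to the model."
    if template:
        click.echo(f"expanding template {template!r} with input {prompt!r}")
```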

simonw commented 1 year ago

Further work on this will happen here: