frankfralick / openai-func-enums

A set of Rust macros for working with OpenAI function/tool calls.
MIT License

Integration with clap? #1

Closed WestXu closed 1 year ago

WestXu commented 1 year ago

Great project! This is the way a Rust GPT interface should be.

Looking at the examples, the interface looks a lot like clap's. What if we just reuse clap's command and argument parsing logic and avoid all the hassle?

For example, what if we provide a macro or trait implemented for clap's command and subcommand, so that any clap app can instantly become GPT-integrated, letting users just type myapp gpt "**instructions**".

Just a thought.

frankfralick commented 1 year ago

This sounds like a good idea. I was not familiar with clap but just read through their docs. Let me make sure I generally understand what you mean. Say I have an app like this:

use clap::{Parser, Subcommand};

#[derive(Parser)]
#[clap(author, version, about, long_about = None)]
#[clap(propagate_version = true)]
struct Cli {
    #[clap(subcommand)]
    command: Commands,
}

#[derive(Subcommand, SubcommandGPT)]
enum Commands {
    /// Adds two numbers
    Add {
        a: f64,
        b: f64,
    },
    /// Subtracts two numbers
    Subtract {
        a: f64,
        b: f64,
    },
    /// Multiplies two numbers
    Multiply {
        a: f64,
        b: f64,
    },
    /// Divides two numbers
    Divide {
        a: f64,
        b: f64,
    },
}

fn main() {
    let cli = Cli::parse();

    match cli.command {
        Commands::Add { a, b } => println!("Result: {}", a + b),
        Commands::Subtract { a, b } => println!("Result: {}", a - b),
        Commands::Multiply { a, b } => println!("Result: {}", a * b),
        Commands::Divide { a, b } => {
            if b != 0.0 {
                println!("Result: {}", a / b)
            } else {
                panic!("Cannot divide by zero");
            }
        },
    };
}

You are suggesting that there could possibly be an additional macro like "SubcommandGPT". This macro would, I guess, need to do three things: generate an enum like CommandsGPT that has a single variant taking a string argument; generate a struct and function signature for each "subcommand" that returns the JSON needed; and then, I think(?), generate an associated function for the CommandsGPT enum that gathers a vector of all the functions and makes the request to OpenAI. Something like this:

#[derive(Parser)]
#[clap(author, version, about, long_about = None)]
#[clap(propagate_version = true)]
struct Cli {
    #[clap(subcommand)]
    command: Commands,

    #[clap(subcommand)]
    gpt: CommandsGPT,
}

#[derive(Subcommand, SubcommandGPT)]
enum Commands {
    /// Adds two numbers
    Add {
        a: f64,
        b: f64,
    },
    /// Subtracts two numbers
    Subtract {
        a: f64,
        b: f64,
    },
    /// Multiplies two numbers
    Multiply {
        a: f64,
        b: f64,
    },
    /// Divides two numbers
    Divide {
        a: f64,
        b: f64,
    },
}

Something like that seems like it would be reasonable to do.
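For illustration, the function definition the macro would need to produce for the Add variant would presumably look something like the sketch below. This is just a guess at the shape, following OpenAI's function-calling schema; the actual names the macro emits are still up in the air:

use serde_json::{json, Value};

// Roughly what a generated "function JSON" for the Add variant might look like.
fn add_function_json() -> Value {
    json!({
        "name": "Add",
        "description": "Adds two numbers",
        "parameters": {
            "type": "object",
            "properties": {
                "a": { "type": "number" },
                "b": { "type": "number" }
            },
            "required": ["a", "b"]
        }
    })
}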

Another thought: I'm not sure how yet, but it seems like it might be possible to support multi-step requests, like a prompt of "Multiply 5 and 7, then divide by 3". We would maybe want an additional derive macro like "SubcommandGPTMultiStep" so people can opt in. This would entail an initial call that expects back a vector of strings transforming the initial prompt into a list of stepwise prompts. In this example we would expect a vector to come back like "Multiply 5 and 7." and "Divide the prior result by 3.", and then we would iterate through those strings. If there is more than one step, after step one we would convert the prior result to a string and prepend "The prior result was 35. " to the next step's prompt, giving us a new prompt "The prior result was 35. Divide the prior result by 3." I don't know, I'm just thinking out loud. It wouldn't be the simplest thing in the world to do, but it would be interesting to try, and potentially a very nice drop-in improvement to certain clap applications.
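Just to make that concrete, the loop I'm imagining is roughly the sketch below. split_into_steps and run_single_step are made-up names standing in for an initial OpenAI call that returns the stepwise prompts and for the existing single-function-call path; none of this exists yet:

// Rough sketch of the multi-step idea, using hypothetical helpers.
async fn run_multi_step(prompt: &str) -> Result<f64, Box<dyn std::error::Error>> {
    // First call: ask the model to break the prompt into ordered step prompts.
    let steps: Vec<String> = split_into_steps(prompt).await?;

    let mut prior_result: Option<f64> = None;
    for step in steps {
        // After the first step, prepend the prior result so the model has context.
        let step_prompt = match prior_result {
            Some(value) => format!("The prior result was {}. {}", value, step),
            None => step,
        };
        prior_result = Some(run_single_step(&step_prompt).await?);
    }
    prior_result.ok_or_else(|| "no steps returned".into())
}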

Before any additions like this, I have changes I want to make, primarily to the descriptions that currently take a hardcoded usize. That was a holdover from what I was doing previously; these values can all be calculated at compile time in the procedural macro, and doing that would be more compatible with the conventions used in clap. I would also probably want some sort of config arguments, like which model to make the requests with.

Let me know your thoughts.

WestXu commented 1 year ago

Wow, thanks for taking this seriously and moving it forward. I like the way you think out loud.

TBH I am not that familiar with what Rust macros can do, but I just assumed this integration was possible.

As for the interface you propose, it's basically what I had in mind, except that I don't think the gpt subcommand needs to (or can) go into clap's Cli struct. An additional variant for the Commands enum could be generated on the fly by the SubcommandGPT macro alone, based on the other existing variants, providing a UI like this:

$ myapp gpt "Multiply 5 and 7"
Executing Commands::Multiply {
    a: 5,
    b: 7
}.
Confirm?(yes|no)
yes
35

Clap doesn't need to be aware of this. It's just built on top of it.

I suggest this not only because it's convenient for clap's massive user base, but also because it's so natural to map ChatGPT's function interface onto clap's command interface. They're logically the same thing, so I assumed it would be trivial to add the clap integration.

As for multi-step requests, I think they're great or even necessary for this project if GPT knows how to, and is designed to, call multiple functions (I didn't dig into this), but they're not necessary for the clap integration. CLIs tend to be one-off: by the time the Confirm?(yes|no) above is printed, the OpenAI session should already be closed.

frankfralick commented 1 year ago

I'm by no means a macro expert, but with procedural macros it isn't allowed to add a variant to an enum; we can only read the token stream of the thing the macro decorates and generate new structs, enums, functions, etc. That is why I was thinking we would need the existing top-level command that allows normal, direct usage of the application, and then a second command with a single subcommand on a type we generate that knows how to make the required JSON. I started working on this last night, so things should be clearer once I at least get something basic working.
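To show what I mean about only being able to generate new items: stripped way down, the mechanics of the derive macro would look roughly like this. This is just a sketch of the shape, not the actual macro, and the generated names are placeholders:

// In the proc-macro crate. A derive can't modify the decorated enum; it can only
// parse its tokens and emit new items (a companion type, impls, functions) beside it.
use proc_macro::TokenStream;
use quote::quote;
use syn::{parse_macro_input, DeriveInput};

#[proc_macro_derive(SubcommandGPT)]
pub fn subcommand_gpt(input: TokenStream) -> TokenStream {
    // We can read the enum's variants, doc comments, and field types here...
    let _ast = parse_macro_input!(input as DeriveInput);

    // ...but we can't add a variant to the enum itself. All we can do is emit new items.
    let expanded = quote! {
        pub enum CommandsGPT {
            GPT { prompt: String },
        }

        impl CommandsGPT {
            pub fn all_function_jsons() -> Vec<serde_json::Value> {
                // A real implementation would build one JSON definition per variant
                // of the decorated enum.
                Vec::new()
            }
        }
    };
    expanded.into()
}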

WestXu commented 1 year ago

with procedural macros it isn't allowed to add a variant to an enum

Oh, if that's the case, sorry for not being aware of it. It'd be so cool if this could work.

frankfralick commented 1 year ago

[screenshot showing the example below in action]

This isn't super easy, but I can see that it should be possible. I am trying to have the macros do as much as possible so that using this isn't a ton of effort. One issue is that if the consuming app does "match cli.command {..." and in-lines its command logic, that spoils the ability to move code into the macro. To minimize how much the consuming code has to do, it is much better to have it implement traits like "RunCli" and "RunCommand", with a "run" function that contains what you actually want to do with the input arguments. If that exists in the code, then let cli = Cli::parse(); cli.run(); can work, I think, and this way the logic that actually calls OpenAI can live in the macro. A rough sketch of what I mean is below, followed by the example code I currently have, which does what the image shows. At present the example is dumb, I'm repeating logic, etc., but I think all of the "extra" dumb code can be replaced with an implementation of a run trait, and then it's just cli.run() (I think).
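Something along these lines; the trait names are just placeholders and nothing like this exists in the crate yet:

// Hypothetical traits. The consuming app implements RunCommand with its actual
// command logic; the macro could then generate the RunCli impl plus all of the
// OpenAI plumbing behind the GPT variant.
pub trait RunCommand {
    fn run_command(&self);
}

pub trait RunCli {
    fn run(&self);
}

// Using the Cli / Commands types from the full example below:
impl RunCommand for Commands {
    fn run_command(&self) {
        match self {
            Commands::Add { a, b } => println!("Result: {}", a + b),
            Commands::Subtract { a, b } => println!("Result: {}", a - b),
            Commands::Multiply { a, b } => println!("Result: {}", a * b),
            Commands::Divide { a, b } => {
                if *b != 0.0 {
                    println!("Result: {}", a / b);
                } else {
                    panic!("Cannot divide by zero");
                }
            }
            // The GPT variant would be covered by macro-generated code that makes the
            // OpenAI request and routes the returned function call back through here.
            Commands::GPT { .. } => todo!("generated by the macro"),
        }
    }
}

impl RunCli for Cli {
    fn run(&self) {
        self.command.run_command();
    }
}

// main then shrinks to: let cli = Cli::parse(); cli.run();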

use async_openai::{
    types::{
        ChatCompletionFunctionCall, ChatCompletionRequestMessageArgs,
        CreateChatCompletionRequestArgs, FunctionCall, Role,
    },
    Client,
};
use clap::{Parser, Subcommand, ValueEnum};
use openai_func_enums::{
    generate_value_arg_info, get_function_chat_completion_args, SubcommandGPT,
};
use serde_json::{json, Value};
use std::error::Error;

#[derive(Parser)]
#[clap(author, version, about, long_about = None)]
#[clap(propagate_version = true)]
struct Cli {
    #[clap(subcommand)]
    command: Commands,
}

#[derive(Debug, Subcommand, SubcommandGPT)]
pub enum Commands {
    /// Adds two numbers
    Add {
        a: f64,
        b: f64,
    },
    /// Subtracts two numbers
    Subtract {
        a: f64,
        b: f64,
    },
    /// Multiplies two numbers
    Multiply {
        a: f64,
        b: f64,
    },
    /// Divides two numbers
    Divide {
        a: f64,
        b: f64,
    },
    GPT {
        a: String,
    },
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let cli = Cli::parse();

    match cli.command {
        Commands::Add { a, b } => {
            println!("Result: {}", a + b);
        }
        Commands::Subtract { a, b } => println!("Result: {}", a - b),
        Commands::Multiply { a, b } => println!("Result: {}", a * b),
        Commands::Divide { a, b } => {
            if b != 0.0 {
                println!("Result: {}", a / b)
            } else {
                panic!("Cannot divide by zero");
            }
        }
        Commands::GPT { a } => {
            // Function definitions generated by the SubcommandGPT derive macro:
            let function_args = get_function_chat_completion_args(CommandsGPT::all_function_jsons)?;
            let request = CreateChatCompletionRequestArgs::default()
                .max_tokens(512u16)
                .model("gpt-4-0613")
                .messages([ChatCompletionRequestMessageArgs::default()
                    .role(Role::User)
                    .content(a)
                    .build()?])
                .functions(function_args.0)
                .function_call("auto")
                .build()?;

            let client = Client::new();
            let response_message = client
                .chat()
                .create(request)
                .await?
                .choices
                .get(0)
                .unwrap()
                .message
                .clone();

            // println!("This is the response message returned:");
            // println!("{:#?}", response_message);

            if let Some(function_call) = response_message.function_call {
                // Map the model's function call back onto the Commands enum and run it.
                match CommandsGPT::parse_gpt_function_call(&function_call) {
                    Ok(FunctionResponse::AddResponse(response)) => {
                        let result = response.execute_command();
                        match result {
                            Commands::Add { a, b } => {
                                println!("Result: {}", a + b);
                            }
                            _ => {}
                        }
                    }
                    Ok(FunctionResponse::SubtractResponse(response)) => {
                        let result = response.execute_command();
                        match result {
                            Commands::Subtract { a, b } => {
                                println!("Result: {}", a - b);
                            }
                            _ => {}
                        }
                    }
                    Ok(FunctionResponse::DivideResponse(response)) => {
                        let result = response.execute_command();
                        println!("Result: {:#?}", result);
                        match result {
                            Commands::Divide { a, b } => {
                                if b != 0.0 {
                                    println!("Result: {}", a / b)
                                } else {
                                    panic!("Cannot divide by zero");
                                }
                            }
                            _ => {}
                        }
                    }
                    Ok(FunctionResponse::MultiplyResponse(response)) => {
                        let result = response.execute_command();
                        println!("Result: {:#?}", result);
                        match result {
                            Commands::Multiply { a, b } => {
                                println!("Result: {}", a * b);
                            }
                            _ => {}
                        }
                    }
                    Err(e) => {
                        println!("There was an error:  {:#?}", e)
                    }
                    _ => {}
                }
            }
        }
    };
    Ok(())
}

frankfralick commented 1 year ago

A basic solution to this is in bfdd24e and 8641e16. If you have any issues with it or other suggestions, don't hesitate to let me know.

WestXu commented 1 year ago

Wow, it's so COOOOOL! I'm putting it into every CLI I've built right away!