RockChinQ / CallingGPT

Build your own ChatGPT plugin platform with GPT's function calling ability | func call by GPT

Option to bypass function execution return msg, only show next TEXT return msg #5

Closed rzc331 closed 1 year ago

rzc331 commented 1 year ago

https://github.com/RockChinQ/CallingGPT/blob/51f3cd7245ac50e42dcd9df8a70d70676be4b505/CallingGPT/session/session.py#L49-L65

These lines of code currently execute the corresponding function and print out its return value, which is great.

I wonder if we could go a step further: if ChatGPT chooses a function call, we execute the function and immediately send a message back to ChatGPT with an appended message in this format:

{ "role": "function", "name": name_of_the_function_executed, "content": function_return_value }

and session.ask() only returns ChatGPT's text reply. (Possibly, ChatGPT can call multiple functions in a row, and the user only wants the final answer.)
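The loop described above can be sketched roughly as follows. This is only an illustration, not the library's code: `fake_chat_completion`, `FUNCTIONS`, and `get_weather` are hypothetical stand-ins for `openai.ChatCompletion.create` and a registered function, so the control flow can be shown without an API key.

```python
import json

# Hypothetical stub standing in for openai.ChatCompletion.create: it returns a
# function_call on the first turn, then a plain text reply once it sees a
# message with role "function" in the history.
def fake_chat_completion(messages: list[dict]) -> dict:
    if messages and messages[-1]["role"] == "function":
        return {"role": "assistant", "content": "It is 20 degrees in Boston."}
    return {
        "role": "assistant",
        "function_call": {"name": "get_weather", "arguments": '{"city": "Boston"}'},
    }

# Hypothetical registry of callable functions.
FUNCTIONS = {"get_weather": lambda city: f"20 degrees in {city}"}

def ask(messages: list[dict], user_msg: str) -> str:
    """Execute every function_call in a chain; only return the final text."""
    messages.append({"role": "user", "content": user_msg})
    reply = fake_chat_completion(messages)
    while "function_call" in reply:
        fc = reply["function_call"]
        result = FUNCTIONS[fc["name"]](**json.loads(fc["arguments"]))
        # Feed the result back with role "function", as proposed above.
        messages.append({"role": "function", "name": fc["name"], "content": str(result)})
        reply = fake_chat_completion(messages)
    messages.append({"role": "assistant", "content": reply["content"]})
    return reply["content"]
```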

RockChinQ commented 1 year ago

yeah, this feature has already been planned (in my mind), but maybe it should go after #4 .

RockChinQ commented 1 year ago

Check resp_chain.py for a PoC of continuous function calling.

rzc331 commented 1 year ago

Check resp_chain.py for a PoC of continuous function calling.

I followed your example and added an option 'fc_chain' to Session.ask. It does the trick, but may be a bit verbose:

import json
import logging

import openai

# (Namespace comes from the CallingGPT package, imported as in session.py)

class Session:

    def __init__(self, modules: list, model: str = "gpt-3.5-turbo-0613", **kwargs):
        self.namespace = Namespace(modules)
        self.model = model
        # keep the history per instance (a class-level list would be shared
        # across all Session objects)
        self.messages: list[dict] = []
        self.resp_log = []
        self.args = {
            "model": self.model,
            "messages": self.messages,
            **kwargs
        }
        if len(self.namespace.functions_list) > 0:
            self.args['functions'] = self.namespace.functions_list
            self.args['function_call'] = "auto"

    def ask(self, msg: str, fc_chain: bool = False) -> dict:
        self.messages.append(
            {
                "role": "user",
                "content": msg
            }
        )

        resp = openai.ChatCompletion.create(
            **self.args
        )
        self.resp_log.append(resp)

        logging.debug("Response: {}".format(resp))
        reply_msg = resp["choices"][0]['message']

        ret = {}

        if fc_chain:
            while 'function_call' in reply_msg:
                resp = self.fc_chain(reply_msg['function_call'])
                reply_msg = resp["choices"][0]['message']
            ret = {
                "type": "message",
                "value": reply_msg['content'],
            }

            self.messages.append({
                "role": "assistant",
                "content": reply_msg['content']
            })

            return ret

        else:
            if 'function_call' in reply_msg:

                fc = reply_msg['function_call']
                args = json.loads(fc['arguments'])
                call_ret = self._call_function(fc['name'], args)

                self.messages.append({
                    "role": "function",
                    "name": fc['name'],
                    "content": str(call_ret)
                })

                ret = {
                    "type": "function_call",
                    "func": fc['name'].replace('-', '.'),
                    "value": call_ret,
                }
            else:
                ret = {
                    "type": "message",
                    "value": reply_msg['content'],
                }

                self.messages.append({
                    "role": "assistant",
                    "content": reply_msg['content']
                })

            return ret

    def fc_chain(self, fc_cmd: dict):
        """
        Execute the function call and send the result back to ChatGPT.

        Args:
            fc_cmd(dict): The function call command.

        Returns:
            dict: The response from ChatGPT.
        """
        fc_args = json.loads(fc_cmd['arguments'])
        call_ret = self._call_function(fc_cmd['name'], fc_args)

        self.messages.append({
            "role": "function",
            "name": fc_cmd['name'],
            "content": f'function successfully called with return value: {str(call_ret)}'
        })
        resp = openai.ChatCompletion.create(
            **self.args
        )
        self.resp_log.append(resp)

        return resp

    def _call_function(self, function_name: str, args: dict):
        return self.namespace.call_function(function_name, args)

One thing to note here: if we only put str(call_ret) in the content, the model may enter an infinite loop, calling the same function repeatedly whenever call_ret does not clearly express that the call was a "success". After I added the prefix "function successfully called with return value: ", this issue was much relieved.
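Since the while loop in fc_chain has no exit condition other than the model returning plain text, a defensive cap on the number of chained calls is a cheap safeguard against that infinite loop. A minimal sketch, assuming a hypothetical `fc_step` callback that executes one function call and returns the model's next reply (MAX_CHAIN_STEPS is an arbitrary choice):

```python
MAX_CHAIN_STEPS = 5  # hypothetical cap; tune to taste

def run_chain(reply_msg: dict, fc_step) -> dict:
    """Follow function calls until the model returns plain text, with a cap.

    fc_step takes one function_call dict, executes it, feeds the result back
    to the model, and returns the model's next reply message.
    """
    steps = 0
    while "function_call" in reply_msg:
        if steps >= MAX_CHAIN_STEPS:
            # Bail out instead of re-calling the same function forever.
            raise RuntimeError("function-call chain exceeded MAX_CHAIN_STEPS")
        reply_msg = fc_step(reply_msg["function_call"])
        steps += 1
    return reply_msg
```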

rzc331 commented 1 year ago

"content": f'function successfully called with return value: {str(call_ret)}' — I noticed that even this prompt can still suffer from the repeated-call issue.

"content": f'function successfully called with return value: {str(call_ret)}, please go to next step.' — this prompt works much better.
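The whole wording fix fits in a small helper, sketched below; the prefix and suffix strings are just the ones found to work above, and the function name is hypothetical:

```python
def function_result_message(name: str, call_ret) -> dict:
    # Wrap the raw return value with an explicit success note and a nudge to
    # move on; per the observations above, this keeps the model from
    # re-calling the same function.
    return {
        "role": "function",
        "name": name,
        "content": (
            f"function successfully called with return value: {call_ret}, "
            "please go to next step."
        ),
    }
```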