langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License

weird behaviour with fine tuned gpt-3.5-turbo-0613 model #2620

Closed Cesar-M-Diaz closed 11 months ago

Cesar-M-Diaz commented 1 year ago

Hi everyone

My langchain version is 0.0.148

My team is trying to use a fine-tuned gpt-3.5-turbo-0613 model with a chat-conversational-react-description agent, but the agent's behaviour is erratic.

The responses come back as plain text, as shown in the trace below, and ignore the expected action format (an object with action and action_input attributes).

Is there a special configuration for fine-tuned models that we may be missing?

We configured the ChatOpenAI model like this:

import { ChatOpenAI } from "langchain/chat_models/openai";

const model = new ChatOpenAI({
  temperature: 0.0,
  modelName: "ft:gpt-3.5-turbo-0613:{our_org_name}::{our_model_id}",
});
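
We create the agent with initializeAgentExecutorWithOptions, roughly like this (a minimal sketch; our real tool list is omitted, so the empty tools array here is just a placeholder):

import { initializeAgentExecutorWithOptions } from "langchain/agents";

// Placeholder: our real tool list goes here.
const tools = [];

const executor = await initializeAgentExecutorWithOptions(tools, model, {
  agentType: "chat-conversational-react-description",
  verbose: true,
});

const result = await executor.call({ input: "what is node" });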

When we use the regular models, the app works fine.

dosubot[bot] commented 1 year ago

🤖

Hello,

Thank you for reaching out and providing detailed information about your issue.

From your description, it seems like the issue might be related to how the fine-tuned model is interpreting and formatting the actions. The ChatOpenAI class in LangChainJS uses the predictMessages method to predict the responses of the chat model. This method should work with fine-tuned models as well, as long as the model is correctly specified.

However, the prompt that tells the model how to format actions is assembled by the createPrompt method in the ChatConversationalAgent class. This method builds a system message, a human message, and a list of tool descriptions, and it pulls the format instructions from an output parser.

If the responses from your fine-tuned model are not formatted correctly, the output parser may be failing to interpret them. This could happen if the fine-tuned model's output format differs from that of the regular models.

To troubleshoot this issue, I would recommend checking the following:

  1. Ensure that the fine-tuned model is correctly specified in the ChatOpenAI constructor. The model name should be in the format ft:gpt-3.5-turbo-0613:{your_org_name}::{your_model_id}.

  2. Check the output format of the fine-tuned model. If it differs from that of the regular models, you might need to supply a custom output parser that can interpret the fine-tuned model's responses (see the sketch after this list).

  3. If the issue persists, it might help to log the prompt produced by the createPrompt method and the raw model completion, and compare them to the expected output.
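
As a starting point for item 2, here is a minimal sketch of a lenient output parser. The class name LenientActionParser and the regex heuristic are illustrative, not part of the library:

import { AgentActionOutputParser } from "langchain/agents";
import { AgentAction, AgentFinish } from "langchain/schema";

class LenientActionParser extends AgentActionOutputParser {
  lc_namespace = ["custom", "agents"];

  async parse(text: string): Promise<AgentAction | AgentFinish> {
    // Pull the first JSON-looking blob out of the completion, fenced or not.
    const match = text.match(/\{[\s\S]*\}/);
    if (match) {
      try {
        const blob = JSON.parse(match[0]);
        if (blob.action === "Final Answer") {
          return { returnValues: { output: blob.action_input }, log: text };
        }
        return { tool: blob.action, toolInput: blob.action_input, log: text };
      } catch {
        // Malformed JSON: fall through and treat the text as a final answer.
      }
    }
    // The fine-tuned model answered in plain text; surface it instead of throwing.
    return { returnValues: { output: text }, log: text };
  }

  getFormatInstructions(): string {
    return 'Respond with a JSON blob with "action" and "action_input" keys.';
  }
}

It could then be passed to the agent, for example via ChatConversationalAgent.fromLLMAndTools(model, tools, { outputParser: new LenientActionParser() }), assuming your version accepts an outputParser argument there. This is a sketch rather than a drop-in fix.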

I hope this helps! If you have any further questions or if the issue persists, please don't hesitate to ask.

Best regards, Dosu

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

jacoblee93 commented 1 year ago

Hey @Cesar-M-Diaz! Can you tell me a bit more about how you've fine-tuned your model?

JungMinu commented 1 year ago

@jacoblee93 We trained the model with the official OpenAI API, following the official fine-tuning guide: https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset. We tested the fine-tuned model through the OpenAI API and the OpenAI playground and confirmed it answers as expected everywhere except LangChain; with LangChain, it always answers as plain text only.

jacoblee93 commented 1 year ago

Gotcha - will dig in. @Cesar-M-Diaz can you share that LangChain trace with me?

I fine-tuned a model myself that's working OK, but there could be something in how it's being called?

JungMinu commented 1 year ago

@jacoblee93 here's the trace:

[chain/start] [1:chain:AgentExecutor] Entering Chain run with input: {
  "input": "System: Remember your rules before reply.\n   Human: what is node",
  "chat_history": []
}
[chain/start] [1:chain:AgentExecutor > 2:chain:LLMChain] Entering Chain run with input: {
  "input": "System: Remember your rules before reply.\n   Human: what is node",
  "chat_history": [],
  "agent_scratchpad": [],
  "stop": [
    "Observation:"
  ]
}
[llm/start] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:ChatOpenAI] Entering LLM run with input: {
  "messages": [
    [
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "SystemMessage"
        ],
        "kwargs": {
          "content": "Your description and rules:\n\nJust reply \"text\": \"{\\n    \\\"action\\\": \\\"Final Answer\\\",\\n    \\\"action_input\\\": \"Sorry, I am not able to answer this question.\"\\n}\" instead.\n",
          "additional_kwargs": {}
        }
      },
      {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain",
          "schema",
          "HumanMessage"
        ],
        "kwargs": {
          "content": "TOOLS\n------\nSystem: Remember your rules before reply.\n   Human: what is node",
          "additional_kwargs": {}
        }
      }
    ]
  ]
}
[llm/end] [1:chain:AgentExecutor > 2:chain:LLMChain > 3:llm:ChatOpenAI] [985ms] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "text",
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain",
            "schema",
            "AIMessage"
          ],
          "kwargs": {
            "content": "text",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llmOutput": {
    "tokenUsage": {}
  }
}
[chain/end] [1:chain:AgentExecutor > 2:chain:LLMChain] [987ms] Exiting Chain run with output: {
  "text": "text"
}
Cesar-M-Diaz commented 1 year ago

@jacoblee93 To fine-tune the model, do we need to use training data like the example below? This is an example taken from this video; apparently, when using an agent, we need to include the tool-selection and tool-execution steps in the training conversations. Example:

[{'role': 'system',
  'content': 'Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.'},
 {'role': 'user',
  'content': 'TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n> Vector Search Tool: This tool allows you to get research information about LLMs.\n\nRESPONSE FORMAT INSTRUCTIONS\n----------------------------\n\nWhen responding to me, please output a response in one of two formats:\n\n**Option 1:**\nUse this if you want the human to use a tool.\nMarkdown code snippet formatted in the following schema:\n\n```json\n{\n    "action": string, \\ The action to take. Must be one of Vector Search Tool\n    "action_input": string \\ The input to the action\n}\n```\n\n**Option #2:**\nUse this if you want to respond directly to the human. Markdown code snippet formatted in the following schema:\n\n```json\n{\n    "action": "Final Answer",\n    "action_input": string \\ You should put what you want to return to use here\n}\n```\n\nUSER\'S INPUT\n--------------------\nHere is the user\'s input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\nWhat is the main focus of the technical report IDSIA-01-11?'},
 {'role': 'assistant',
  'content': '```json\n{\n    "action": "Vector Search Tool",\n    "action_input": "Main focus of technical report IDSIA-01-11"\n}\n```'},
 {'role': 'user',
  'content': '[High-Performance Neural Networks\nfor Visual Object Classi\x0ccation\nDan C. Cire\x18 san, Ueli Meier, Jonathan Masci,\nLuca M. Gambardella and J\x7f urgen Schmidhuber\nTechnical Report No. IDSIA-01-11\nJanuary 2011\nIDSIA / USI-SUPSI\nDalle Molle Institute for Arti\x0ccial Intelligence\nGalleria 2, 6928 Manno, Switzerland\nIDSIA is a joint institute of both University of Lugano (USI) and University of Applied Sciences of Southern Switzerland (SUPSI),\nand was founded in 1988 by the Dalle Molle Foundation which promoted quality of life.\nThis work was partially supported by the Swiss Commission for Technology and Innovation (CTI), Project n. 9688.1 IFF:\nIntelligent Fill in Form.arXiv:1102.0183v1  [cs.AI]  1 Feb 2011\nTechnical Report No. IDSIA-01-11 1\nHigh-Performance Neural Networks\nfor Visual Object Classi\x0ccation\nDan C. Cire\x18 san, Ueli Meier, Jonathan Masci,\nLuca M. Gambardella and J\x7f urgen Schmidhuber\nJanuary 2011\nAbstract\nWe present a fast, fully parameterizable GPU implementation of Convolutional Neural\nNetwork variants. Our feature extractors are neither carefully designed nor pre-wired, but]'},
 {'role': 'assistant',
  'content': '```json\n{\n    "action": "Final Answer",\n    "action_input": "The main focus of the technical report IDSIA-01-11 is the presentation of a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants for visual object classification."\n}\n```'}]
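
For what it's worth, the OpenAI fine-tuning API expects each conversation like the one above to be serialized as one {"messages": [...]} object per line of an uploaded JSONL file. A minimal sketch in Node (the file name train.jsonl is arbitrary):

import { writeFileSync } from "node:fs";

// Each element is one training conversation: an array of { role, content } messages.
const conversations: { role: string; content: string }[][] = [
  /* ...conversations like the example above... */
];

// One JSON object per line, wrapping each conversation under a "messages" key.
const jsonl = conversations
  .map((messages) => JSON.stringify({ messages }))
  .join("\n");
writeFileSync("train.jsonl", jsonl);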
dosubot[bot] commented 11 months ago

Hi, @Cesar-M-Diaz

I'm helping the langchainjs team manage their backlog and am marking this issue as stale. From what I understand, you are experiencing random behavior and formatting issues with a fine-tuned gpt-3.5-turbo-0613 model in the chat-conversational-react-description agent. There have been discussions about potential issues with the output parser and formatting of actions, as well as a request for more details on the fine-tuning process.

Could you please confirm if this issue is still relevant to the latest version of the langchainjs repository? If it is, please let the langchainjs team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you!