nasirus / llama_index

MIT License

Output parser breaks for ChatGPT #4

Open nasirus opened 1 year ago

nasirus commented 1 year ago

I noticed that the guardrails output parser breaks and outputs the rail specification when I try to use gpt-3.5-turbo with llama_index: `llm_predictor = StructuredLLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"))`. I don't have any issue with other models.
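The symptom described (the parser emitting the rail specification instead of parsed output) usually means the chat model echoed the spec back verbatim rather than filling it in. A minimal stdlib sketch of what such a parse failure looks like — `parse_structured_output` and the `<rail` check are hypothetical illustrations, not llama_index or guardrails code:

```python
import json

def parse_structured_output(raw: str) -> dict:
    """Parse model output as JSON; fail loudly if the model echoed markup."""
    text = raw.strip()
    if text.startswith("<rail"):
        # The chat model returned the rail spec verbatim instead of answering.
        raise ValueError("model echoed the rail spec instead of producing output")
    return json.loads(text)

# A completion model typically returns the filled-in structure:
result = parse_structured_output('{"answer": 42}')
```

The point of the explicit `<rail` check is to turn a confusing downstream parse error into an immediate, descriptive failure when the model misbehaves.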

nasirus commented 1 year ago

Based on the information provided, the output parser breaks when the gpt-3.5-turbo chat model is used with llama_index. To narrow down the cause, I suggest the following steps:

  1. Check the model being used. gpt-3.5-turbo is served through the chat completion API, and the llama_index version in use may not be fully compatible with chat models.

  2. Check the parameters passed to ChatOpenAI. Parameters that work with completion models may not behave the same way with chat models.

  3. Check the code and the rail specification for formatting errors that could cause the parser to fail.

  4. Check the documentation for the guardrails integration; it may not yet be up to date for chat models.

  5. Enable logging and inspect the raw LLM output to see exactly what the model returned before parsing.

I hope this helps to resolve the issue. If you have any further questions or need additional assistance, please let me know.