Closed: jma7889 closed this issue 1 year ago
It looks like you set the llm_predictor to be the prompt helper (I think this is a mistake) 👀
My example was wrong, so embarrassing. The real issue was that I needed to use ChatOpenAI instead of OpenAI from langchain for the LLMPredictor. The code for that part is changed to the following, and it works now. Closing the ticket.
```python
from llama_index import (
    LLMPredictor,
    ServiceContext,
    PromptHelper,
)
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI

# Define LLM predictors; the default is text-davinci-003.
default_predictor = LLMPredictor()
# text-davinci-002 is a completion model, so it still goes through langchain's OpenAI class.
davinci_predictor = LLMPredictor(llm=OpenAI(temperature=0.01, model_name="text-davinci-002"))
# Chat models (gpt-3.5-turbo, gpt-4, gpt-4-32k) must use ChatOpenAI, not OpenAI.
gpt35_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.01, model_name="gpt-3.5-turbo"))
gpt4_32_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.01, model_name="gpt-4-32k"))
gpt4_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.01, model_name="gpt-4"))
```
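For anyone landing here, below is a minimal sketch (not from the original issue) of how one of these predictors plugs into a custom ServiceContext on llama-index 0.6.x. The PromptHelper values, the "data" directory, and the query string are illustrative assumptions, and the exact PromptHelper signature may vary slightly between 0.6.x releases.

```python
from llama_index import (
    GPTVectorStoreIndex,
    PromptHelper,
    ServiceContext,
    SimpleDirectoryReader,
)

# Illustrative PromptHelper values; tune them for your model's context window.
prompt_helper = PromptHelper(max_input_size=4096, num_output=256, max_chunk_overlap=20)

# Pass the predictor as llm_predictor and the helper as prompt_helper;
# mixing these two up is the mistake mentioned above.
service_context = ServiceContext.from_defaults(
    llm_predictor=gpt35_predictor,
    prompt_helper=prompt_helper,
)

# "data" is a placeholder directory containing the documents to index.
documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(documents, service_context=service_context)
print(index.as_query_engine().query("What is this about?"))
```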
When using a custom service context, I got the following errors with the latest llama-index 0.6.6 and langchain 0.0.168. If service_context is not used, it works.
error messages