Hey @mintisan thanks for raising this. We are in the process of adding more chat engine implementations, stay tuned on this!
Hi @Disiok, I used ChatGPTLLMPredictor and updated the query_engine after each query to achieve this, as shown below. Could you check whether there is any problem with this approach?
Step 1. Define update_model

    from langchain.chat_models import ChatOpenAI
    from langchain.prompts.chat import (
        SystemMessagePromptTemplate,
        HumanMessagePromptTemplate,
        AIMessagePromptTemplate,
    )
    from llama_index import PromptHelper, ServiceContext
    from llama_index.llm_predictor.chatgpt import ChatGPTLLMPredictor
    from llama_index.logger import LlamaLogger

    num_outputs = 512  # max tokens per response (example value)

    ## messages - "role": "system"
    prepend_messages = [
        SystemMessagePromptTemplate.from_template(
            "You are a system ......"
        ),
    ]

    # append messages: user & assistant
    def update_model(query_str, response_str):
        # use ChatGPT [beta]
        if query_str and response_str:
            prepend_messages.append(
                HumanMessagePromptTemplate.from_template(query_str)
            )
            prepend_messages.append(
                AIMessagePromptTemplate.from_template(response_str)
            )
        llm_predictor = ChatGPTLLMPredictor(
            llm=ChatOpenAI(
                temperature=0, model_name="gpt-3.5-turbo", top_p=0, max_tokens=num_outputs
            ),
            prepend_messages=prepend_messages,
        )
        prompt_helper = PromptHelper(
            context_window=4096,
            num_output=num_outputs,
            chunk_overlap_ratio=0.1,
            chunk_size_limit=None,
        )
        llama_logger = LlamaLogger()
        service_context = ServiceContext.from_defaults(
            llm_predictor=llm_predictor,
            prompt_helper=prompt_helper,
            llama_logger=llama_logger,
        )
        return service_context
Step 2. Query the 1st question

    query_engine = index.as_query_engine(
        service_context=update_model("", ""),
    )
    response = query_engine.query(
        "question 1"
    )
    print(response)
Step 3. Update the query_engine and query the 2nd question

    ## messages - "role": "user" and "role": "assistant"
    query_engine = index.as_query_engine(
        service_context=update_model("question 1", str(response)),  ### update service_context for query_engine
    )
    response = query_engine.query(
        "question 2"
    )
Hi, @mintisan! I'm Dosu, and I'm helping the LlamaIndex team manage their backlog. I wanted to let you know that we are marking this issue as stale.
From what I understand, you requested a feature to add a history dialog to the messages parameter list of OpenAI's openai.ChatCompletion.create for the query_engine. Disiok, one of the maintainers, mentioned that they are working on adding more chat engine implementations, which suggests that your feature request may be addressed in the future. You also provided a code snippet showing how you achieved the desired functionality using ChatGPTLLMPredictor, which could serve as a potential workaround for now.
Before we close this issue, we wanted to check if it is still relevant to the latest version of the LlamaIndex repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.
Thank you for your contribution to the LlamaIndex repository!
Feature Description
The OpenAI interface has a messages parameter (as in the sketch below), which retains the questions and answers of previous conversation turns. I don't know whether LlamaIndex could offer a switch parameter to support this mode (the native way), or how I could implement it with the query_engine. (chat_engine with condense mode is not the way; it may change the content of the current question, which is not what I want.) I didn't find anything about this in the relevant documents.
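For reference, this is the kind of call I mean. It is only an illustrative sketch using the openai Python package (pre-1.0 interface); the message contents are placeholders.

    import openai

    # The native OpenAI way: history is carried in the messages list, and the
    # current question is sent unchanged as the last "user" message.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "question 1"},
            {"role": "assistant", "content": "answer 1"},
            {"role": "user", "content": "question 2"},
        ],
    )
    print(response["choices"][0]["message"]["content"])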
Reason
chat_engine with condense mode is not the way to do this; it may change the content of the current question, which is not what I want.
Value of Feature
It follows the native OpenAI way, it is more flexible, and other people are likely to need it as well.
Related issues