run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Feature Request]: add history dialog to OpenAI's openai.ChatCompletion.create message parameter List for [query_engine] #6137

Closed. mintisan closed this issue 1 year ago.

mintisan commented 1 year ago

Feature Description

The OpenAI interface has a messages parameter (as below) that retains the questions and answers from previous turns of the conversation.

I don't know whether LlamaIndex could add a switch parameter to support this mode natively, or how I could implement it with query_engine. (chat_engine with the condense mode is not the way; it may change the content of the current question, which is not what I want.)

I didn't find anything about this in the documentation.

# Note: you need to be using OpenAI Python v0.27.0 for the code below to work
import openai

openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)

Reason

(chat_engine with the condense mode is not the way: it may rewrite the content of the current question, which is not what I want.)
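
For reference, a minimal sketch of the condense-based chat engine being ruled out here (this assumes the as_chat_engine(chat_mode=...) API available at the time of this issue; names may differ in other versions):

# Minimal sketch of the condense-question chat engine (assumes the
# as_chat_engine(chat_mode=...) API available at the time of this issue).
chat_engine = index.as_chat_engine(chat_mode="condense_question")

# Each follow-up question is first condensed with the chat history into a
# standalone question before querying the index, which can change the
# wording of the current question.
response = chat_engine.chat("Where was it played?")
print(response)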

Value of Feature

It would match OpenAI's native usage, be more flexible, and other people likely need it as well.

Related issues

Disiok commented 1 year ago

Hey @mintisan, thanks for raising this. We are in the process of adding more chat engine implementations, so stay tuned!

mintisan commented 1 year ago

Hi @Disiok, I used ChatGPTLLMPredictor and updated the query_engine after each query to achieve this, as follows. Could you check whether there is any problem with it?

Step 1: define update_model

# NOTE: import paths below assume the llama_index 0.6.x / langchain releases
# current at the time of this issue; newer versions have moved these modules.
from llama_index import PromptHelper, ServiceContext
from llama_index.logger import LlamaLogger
from llama_index.llm_predictor.chatgpt import ChatGPTLLMPredictor
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import SystemMessagePromptTemplate
from langchain.prompts.chat import HumanMessagePromptTemplate
from langchain.prompts.chat import AIMessagePromptTemplate

num_outputs = 512  # maximum number of tokens to generate per answer

# system message ("role": "system")
prepend_messages = [
    SystemMessagePromptTemplate.from_template(
        "You are a system ......"
    ),
]

# append the previous turn as user ("role": "user") and
# assistant ("role": "assistant") messages
def update_model(query_str, response_str):
    # use ChatGPT [beta]
    if query_str and response_str:
        prepend_messages.append(
            HumanMessagePromptTemplate.from_template(query_str)
        )
        prepend_messages.append(
            AIMessagePromptTemplate.from_template(response_str)
        )

    llm_predictor = ChatGPTLLMPredictor(
        llm=ChatOpenAI(
            temperature=0, model_name="gpt-3.5-turbo", top_p=0, max_tokens=num_outputs
        ),
        prepend_messages=prepend_messages,
    )

    prompt_helper = PromptHelper(
        context_window=4096,
        num_output=num_outputs,
        chunk_overlap_ratio=0.1,
        chunk_size_limit=None,
    )

    llama_logger = LlamaLogger()
    service_context = ServiceContext.from_defaults(
        llm_predictor=llm_predictor,
        prompt_helper=prompt_helper,
        llama_logger=llama_logger,
    )

    return service_context

Step 2: query the 1st question

# first question: no history yet, so pass empty strings
query_engine = index.as_query_engine(
    service_context=update_model("", ""),
)
response = query_engine.query(
    "question 1"
)
print(response)

Step 3: update query_engine and query the 2nd question

# rebuild the service_context so the previous turn is prepended as
# "role": "user" and "role": "assistant" messages
query_engine = index.as_query_engine(
    service_context=update_model("question 1", str(response)),  # update service_context for query_engine
)

response = query_engine.query(
    "question 2"
)
print(response)
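
A minimal sketch of how this pattern could be wrapped into a multi-turn loop (the chat_with_history helper below is hypothetical, not part of LlamaIndex; it simply reuses the update_model function from step 1):

# Hypothetical helper that keeps a running history by rebuilding the
# query_engine before every turn, using update_model() from step 1.
def chat_with_history(index, questions):
    prev_question, prev_answer = "", ""
    answers = []
    for question in questions:
        query_engine = index.as_query_engine(
            service_context=update_model(prev_question, prev_answer),
        )
        response = query_engine.query(question)
        answers.append(str(response))
        prev_question, prev_answer = question, str(response)
    return answers

answers = chat_with_history(index, ["question 1", "question 2"])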


dosubot[bot] commented 1 year ago

Hi, @mintisan! I'm Dosu, and I'm helping the LlamaIndex team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, you requested a feature to add a history dialog to OpenAI's openai.ChatCompletion.create message parameter list for the query_engine. Disiok, one of the maintainers, mentioned that they are working on adding more chat engine implementations, which suggests that your feature request may be addressed in the future. You also provided a code snippet showing how you achieved the desired functionality using ChatGPTLLMPredictor, which could serve as a potential workaround for now.

Before we close this issue, we wanted to check if it is still relevant to the latest version of the LlamaIndex repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your contribution to the LlamaIndex repository!