torvalds-dev / llama_index

LlamaIndex (formerly GPT Index) is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Feature Request] How to allow multiple Langchain prompt template keys #10

Open Raghavan1988 opened 1 year ago

Raghavan1988 commented 1 year ago

Feature Description

As I understand it, Langchain supports multiple prompt template keys.

[screenshot: a Langchain prompt template with multiple input keys]
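
For example, a Langchain PromptTemplate can declare any number of input variables (the variable names below are illustrative):

```python
from langchain.prompts import PromptTemplate

# A Langchain template may declare several input keys at once.
prompt = PromptTemplate(
    input_variables=["history", "context", "human_input"],
    template=(
        "Conversation so far:\n{history}\n"
        "Extra context:\n{context}\n"
        "Human: {human_input}"
    ),
)
print(prompt.format(history="...", context="...", human_input="Hello"))
```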

When GPT-Index is used as memory, this is reduced to a single key:

https://github.com/jerryjliu/gpt_index/blob/f6a985241f388a9b7513ef5405025e6e2324e01e/gpt_index/langchain_helpers/memory_wrapper.py#L22

It would be useful if multiple prompt template keys could be used together with GPT-Index.

Reason

No response

Value of Feature

No response

torvalds-dev commented 1 year ago

torvalds.dev is analyzing the ticket

torvalds-dev commented 1 year ago

The issue "[Feature Request] How to allow multiple Langchain prompt template keys" asks for support for multiple input keys in Langchain prompt templates when GPT-Index provides the memory.

Currently, the get_prompt_input_key function in memory_wrapper.py expects exactly one input key: it raises a ValueError if more than one key is found. This function determines which input value is used to build the prompt.
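
A simplified sketch of that single-key check (the same shape as the source, though not a verbatim copy):

```python
from typing import Any, Dict, List

def get_prompt_input_key(inputs: Dict[str, Any], memory_variables: List[str]) -> str:
    """Return the single prompt input key, mirroring Langchain's behavior."""
    # "stop" is passed through to the LLM and is not a prompt variable.
    prompt_input_keys = list(set(inputs).difference(memory_variables + ["stop"]))
    if len(prompt_input_keys) != 1:
        # This is the constraint the feature request wants relaxed.
        raise ValueError(f"One input key expected, got {prompt_input_keys}")
    return prompt_input_keys[0]
```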

The prompt_templates.py file contains the templates for different types of prompts. Each template has a specific format and expects certain keys to be present in the input.

To allow multiple keys, the following actions could be taken:

  1. Modify the get_prompt_input_key function to handle multiple keys, for example by returning a list of keys instead of a single key and updating the callers of get_prompt_input_key to handle that list (see the sketch after this list).

  2. Update the prompt templates in prompt_templates.py so that the string formatting can include multiple keys, and extend the PromptTemplate class accordingly.

  3. Update the GPTIndexMemory class and the other related classes in memory_wrapper.py, changing how the input_key and output_key attributes are used and updating methods such as _get_prompt_input_key and load_memory_variables to handle multiple keys (also sketched below).

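A rough sketch of steps 1 and 3, assuming a hypothetical multi-key helper (get_prompt_input_keys and build_query below are illustrative names, not existing API):

```python
from typing import Any, Dict, List

def get_prompt_input_keys(
    inputs: Dict[str, Any], memory_variables: List[str]
) -> List[str]:
    """Multi-key variant: return every prompt input key instead of exactly one."""
    prompt_input_keys = list(set(inputs).difference(memory_variables + ["stop"]))
    if not prompt_input_keys:
        raise ValueError("At least one input key expected, got none")
    return prompt_input_keys

def build_query(inputs: Dict[str, Any], memory_variables: List[str]) -> str:
    """Combine all input values into one query string, e.g. for
    GPTIndexMemory.load_memory_variables when it queries the index."""
    keys = get_prompt_input_keys(inputs, memory_variables)
    return "\n".join(str(inputs[key]) for key in keys)
```
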
Relevant files to this issue are:

  1. llama_index/langchain_helpers/memory_wrapper.py
  2. llama_index/langchain_helpers/prompt_templates.py