hwchase17 / notion-qa


Iterating over LLM models does not work in LangChain #28

Open · yogeshhk opened this issue 1 year ago

yogeshhk commented 1 year ago

Can LLMChain objects be stored and iterated over?

from langchain.llms import OpenAI, HuggingFaceHub
from langchain.chains import LLMChain

llms = [{'name': 'OpenAI', 'model': OpenAI(temperature=0)},
        {'name': 'Flan', 'model': HuggingFaceHub(repo_id="google/flan-t5-xl", model_kwargs={"temperature": 1e-10})}]

for llm_dict in llms:
    llm_name = llm_dict['name']
    llm_model = llm_dict['model']
    chain = LLMChain(llm=llm_model, prompt=prompt)

The first LLM model runs fine, but the second iteration gives the following error:

    chain = LLMChain(llm=llm_model, prompt=prompt)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
prompt
  value is not a valid dict (type=type_error.dict)

Am I missing something in the dictionary declarations?

More details at https://stackoverflow.com/questions/76110329/iterating-over-llm-models-does-not-work-in-langchain
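
For anyone trying to reproduce this: the message is a pydantic validation error, and the same ValidationError can be triggered in isolation by handing LLMChain anything that is not a PromptTemplate, e.g. a plain string (a minimal sketch; the model and template text here are arbitrary):

from langchain.llms import OpenAI
from langchain.chains import LLMChain

# Passing a plain string where LLMChain expects a PromptTemplate raises:
# pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
# prompt
#   value is not a valid dict (type=type_error.dict)
chain = LLMChain(llm=OpenAI(temperature=0), prompt="Answer: {question}")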

TechnoRahmon commented 1 year ago

I have a similar situation. Here is where I store my llm object in a separate file:

from langchain.llms import OpenAI

# Create an instance of OpenAI LLM with the desired configuration
llm_davinci = OpenAI(
    model_name=models_names["completions-davinci"],
    temperature=0,
    max_tokens=256,
    top_p=1.0,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    n=1,
    best_of=1,
    request_timeout=None
)

Then I use the llm_davinci instance in another function, like this:

def ask_llm(query: str, filename: str):

    # prepare the prompt
    prompt = code_assistance.format(context="this is a test", command=query)
    tokens = tiktoken_len(prompt)
    print(f"prompt  : {prompt}")
    print(f"prompt tokens : {tokens}")

    # connect to the LLM
    llm_chain = LLMChain(prompt=prompt, llm=llm_davinci)

    # run the LLM
    with get_openai_callback() as cb:
        response = llm_chain.run()

    return jsonify({'query': query,
                    'response': str(response),
                    'usage': cb})

The issue is with this line:

    # connect to the LLM
    llm_chain = LLMChain(prompt=prompt, llm=llm_davinci)

error:

    llm_chain = LLMChain(prompt=prompt, llm=llm_davinci)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LLMChain
prompt
  value is not a valid dict (type=type_error.dict)

Any idea how to solve this?

yogeshhk commented 1 year ago

@TechnoRahmon In my case the confusion was with the "prompt" variable... try renaming "prompt" inside ask_llm() to something else, like "llm_prompt".

TechnoRahmon commented 1 year ago

@yogeshhk Thank you for replying.

Actually, it has been solved by feeding the prompt to the LLMChain as a PromptTemplate. My issue was that I passed the prompt as a string to the LLMChain; once I changed it to a PromptTemplate, it worked:

    # prepare the prompt
    prompt = PromptTemplate(
        input_variables=give_assistance_input_variables,
        template=give_assistance_prompt
    )
    # connect to the LLM
    llm_chain = LLMChain(prompt=prompt, llm=llm_davinci)
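
Applying the same fix to the loop from the original question: build the PromptTemplate once and reuse it across models. A sketch (template text and variable names are illustrative, and it assumes OPENAI_API_KEY and HUGGINGFACEHUB_API_TOKEN are set in the environment):

from langchain.llms import OpenAI, HuggingFaceHub
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# LLMChain validates `prompt` as a PromptTemplate object, not a plain string
prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question: {question}",
)

llms = [{'name': 'OpenAI', 'model': OpenAI(temperature=0)},
        {'name': 'Flan', 'model': HuggingFaceHub(repo_id="google/flan-t5-xl",
                                                 model_kwargs={"temperature": 1e-10})}]

for llm_dict in llms:
    # Each LLMChain shares the same PromptTemplate; only the llm differs
    chain = LLMChain(llm=llm_dict['model'], prompt=prompt)
    print(llm_dict['name'], chain.run(question="What is 2 + 2?"))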