langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

argument needs to be of type (SquadExample, dict) #15884

Closed tempdeltavalue closed 3 months ago

tempdeltavalue commented 8 months ago

Checked other resources

Example Code

model_name = "Intel/dynamic_tinybert"

tokenizer = AutoTokenizer.from_pretrained(model_name, padding=True, truncation=True, max_length=512)

question_answerer = pipeline(
    "question-answering",
    model=model_name,
    tokenizer=tokenizer,
    return_tensors='pt'
)

llm = HuggingFacePipeline(
    pipeline=question_answerer,
    model_kwargs={"temperature": 0.7, "max_length": 50},
)

prompt_template = """
As literature critic answer me

question: {question}

context: {context}
"""

prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])

chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    chain_type_kwargs = {"prompt": prompt})

question = "Who is Hamlet ?"
answer = chain.invoke({"query": question})  # issue here <--
print(answer)

Description

I tried to implement a simple RetrievalQA chain with a LangChain FAISS vector store (langchain_community.vectorstores.FAISS), but I ran into this error: argument needs to be of type (SquadExample, dict)

The error is raised on this line: answer = chain.invoke({"query": question})

Thank you in advance.

System Info

Windows 10, python 3.11, langchain 0.1.0

Related Components

dosubot[bot] commented 8 months ago

🤖

Hello @tempdeltavalue! I'm Dosu, a friendly bot here to assist you with your LangChain queries, bugs, and even help you learn how to contribute! While we're waiting for a human maintainer, I'm here to provide you with some initial help. Let's tackle this issue together!

Based on the code you've provided and the error message you're encountering, it seems like the issue might be with the way you're invoking the RetrievalQA chain. The invoke method expects a dictionary as an argument, where the key is the input_key (which is "query" by default) and the value is the question you want to ask.

In your case, you're correctly passing a dictionary with a 'query' key, but the error message suggests that the RetrievalQA chain is expecting a SquadExample or a dictionary as an argument. This might be due to the way the RetrievalQA chain is implemented in LangChain.

Here's how you should be invoking the RetrievalQA chain:

question = "Who is Hamlet ?"
answer = chain.invoke({"query": question})
print(answer)

If you're still encountering the same error, it might be helpful to check the implementation of the RetrievalQA chain in LangChain. Specifically, you might want to look at the _call and _acall methods in the BaseRetrievalQA class, which are the methods that are likely being called when you invoke the RetrievalQA chain.

Here is the relevant code:

def _call(
    self,
    inputs: Dict[str, Any],
    run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
    ...

async def _acall(
    self,
    inputs: Dict[str, Any],
    run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, Any]:
    ...

In these methods, inputs is a dictionary where the key is a string and the value can be of any type. The key is expected to be the input_key of the class, which is set to "query" by default. The value associated with this key is treated as the question for which the model retrieves relevant documents and generates an answer.

You can find these methods in the libs/langchain/langchain/chains/retrieval_qa/base.py file in the LangChain repository.

I hope this helps! If you have any other questions or if something is unclear, please let me know.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

tempdeltavalue commented 8 months ago

But this is exactly what I'm doing:

prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])

chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    chain_type_kwargs={"prompt": prompt})

question = "Who is Hamlet ?"
answer = chain.invoke({"query": question})

What should I change ?

eyurtsev commented 8 months ago

@tempdeltavalue Could you see if you can produce a minimal running example, including imports and the stack trace? That will make it easier for someone else to copy the code, run it, and help resolve your issue.

tempdeltavalue commented 8 months ago

@eyurtsev I'm not sure what you mean by a minimal example.

Here is my attempt at building RetrievalQA with local Hugging Face models: https://github.com/tempdeltavalue/temp_l/blob/main/main.py

code

from langchain_community.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter, CharacterTextSplitter

embedding_model_id = "sentence-transformers/all-MiniLM-L6-v2"

embeddings = HuggingFaceEmbeddings(
    model_name=embedding_model_id,
)

def generate_embeddings():
    chunk_size = 2048

    with open("data/shakespeare.txt") as f:
        text = f.read()

        text_splitter = CharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=0)
        pages = text_splitter.split_text(text)

        text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=100)
        texts = text_splitter.create_documents(pages)

    print(len(texts))

    embeddings_db = FAISS.from_documents(texts, embeddings)
    embeddings_db.save_local("faiss_index")

print("start load embeddings")
embeddings_db = FAISS.load_local("faiss_index", embeddings)
print("finish load embeddings")

retriever = embeddings_db.as_retriever(search_kwargs={"k": 10})

from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQA
from transformers import pipeline
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoTokenizer

model_name = "Intel/dynamic_tinybert"

tokenizer = AutoTokenizer.from_pretrained(model_name, padding=True, truncation=True, max_length=512)

question_answerer = pipeline(
    "question-answering",
    model=model_name,
    tokenizer=tokenizer,
    return_tensors='pt'
)

llm = HuggingFacePipeline(
    pipeline=question_answerer,
    model_kwargs={"temperature": 0.7, "max_length": 50},
)

prompt_template = """
As literature critic answer me

question: {question}

context: {context}
"""

prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])

chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    chain_type_kwargs = {"prompt": prompt})

question = "Who is Hamlet ?"
answer = chain.invoke({"query": question})
print(answer)

traceback

Traceback (most recent call last):
  File "C:\Users\tempdelta\Desktop\LLM_ex\main.py", line 86, in <module>
    answer = chain.invoke({"query": question})
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 162, in invoke
    raise e
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\retrieval_qa\base.py", line 144, in _call
    answer = self.combine_documents_chain.run(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 543, in run
    return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 363, in __call__
    return self.invoke(
           ^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 162, in invoke
    raise e
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\combine_documents\base.py", line 136, in _call
    output, extra_return_dict = self.combine_docs(
                                ^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\combine_documents\stuff.py", line 244, in combine_docs
    return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\llm.py", line 293, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 363, in __call__
    return self.invoke(
           ^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 162, in invoke
    raise e
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\chains\llm.py", line 115, in generate
    return self.llm.generate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\language_models\llms.py", line 525, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\language_models\llms.py", line 698, in generate
    output = self._generate_helper(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\language_models\llms.py", line 562, in _generate_helper
    raise e
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\language_models\llms.py", line 549, in _generate_helper
    self._generate(
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_community\llms\huggingface_pipeline.py", line 203, in _generate
    responses = self.pipeline(batch_prompts)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\transformers\pipelines\question_answering.py", line 391, in __call__
    examples = self._args_parser(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\transformers\pipelines\question_answering.py", line 219, in __call__
    inputs[i] = self.normalize(item)
                ^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\tempdelta\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\transformers\pipelines\question_answering.py", line 172, in normalize
    raise ValueError(f"{item} argument needs to be of type (SquadExample, dict)")
ValueError:

tempdeltavalue commented 8 months ago

@dosu-bot

Any suggestions ?

dosubot[bot] commented 8 months ago

🤖

Hello there! I'd be happy to help, but I'll need a bit more information in order to provide relevant suggestions. What specifically are you looking for suggestions on? Whether it's related to software development, technical support, or anything else, feel free to ask and I'll do my best to assist you.


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

christiancopeland commented 8 months ago

New here, so apologies for any incorrect formatting in my comment. I'm trying to get 'tiiuae/falcon-7b-instruct' to run and I've gotten one step past your trace.

Here's my trace now: KeyError: 'You need to provide a dictionary with keys {question:..., context:...}'

Looks like your main issue is the format of your inputs going to the model. Make sure it's a dictionary with a question and context, then you should be good to go. I reformatted my input and got no errors, but also got no output from the model.

Here's my code so far:

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from transformers import AutoTokenizer, AutoModelForCausalLM, FalconForQuestionAnswering, FalconConfig
import transformers
import torch

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)

model = FalconForQuestionAnswering.from_pretrained(
    model,
    cache_dir='./workspaceFalcon/',  # cache the model weights
    torch_dtype=torch.bfloat16,
    device_map="auto",
    offload_folder="offload",
)

model.eval()

config = FalconConfig()

pipeline = transformers.pipeline(
    "question-answering",
    model=model,
    config=config,
    tokenizer=tokenizer,
    device_map="auto",
    max_length=400,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)

pipeline({'context': 'A man went to a shop and bought a car.', 'question': 'What did the man buy?'})

christiancopeland commented 8 months ago

To be more direct, I think once you change the line : answer = chain.invoke({"query": question}) to answer = chain.invoke({"question": question, "context": context}), you should be good to go.

dosubot[bot] commented 8 months ago

🤖

Based on the error message you're encountering, it seems like the function you're calling is expecting an argument of type SquadExample or dict, but it's receiving a different type.

In your code, you're calling chain.invoke({"query": question}) where chain is an instance of RetrievalQA. The invoke method is likely a method of the parent Chain class that RetrievalQA inherits from.

The _call and _acall methods in RetrievalQA expect a dictionary with a key query containing the question string, which matches the input you're providing to invoke. However, the error message suggests that somewhere in the process, a function is being called that expects a SquadExample or dict.

Here are a few things you might want to check:

  1. Ensure that the invoke method of the Chain class (or its parent classes) doesn't require a SquadExample or a different kind of dict as input.
  2. Check the implementation of the RetrievalQA class you're using, especially the _get_docs and _aget_docs methods. Make sure they're correctly implemented and don't require a SquadExample or a different kind of dict as input.
  3. If you're using a custom BaseRetriever in your RetrievalQA instance, check its get_relevant_documents, aget_relevant_documents, similarity_search, and max_marginal_relevance_search methods. Ensure they don't require a SquadExample or a different kind of dict as input.

Unfortunately, without the full context of your code and the Chain class definition, it's hard to provide a definitive solution. If none of these suggestions help, could you please provide more context or the exact code you're using to instantiate and call chain.invoke?

For reference, here is the source code for the RetrievalQA class in the LangChain repository.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

AliRezaBeigy commented 8 months ago

As I understand it, the problem is that RetrievalQA doesn't pass the data to the LLM in the format it expects: the question-answering pipeline wants a {'question': question, 'context': context} structure, but RetrievalQA hands it a list of strings.
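
A quick sketch of that mismatch (reusing the question_answerer pipeline from the original example; the Hamlet context string here is made up purely for illustration):

# Calling the pipeline with the dict it expects works:
question_answerer({"question": "Who is Hamlet ?",
                   "context": "Hamlet is the Prince of Denmark in Shakespeare's play."})
# -> {'score': ..., 'start': ..., 'end': ..., 'answer': ...}

# HuggingFacePipeline._generate instead passes a list of rendered prompt strings,
# which is what triggers the error:
question_answerer(["question: Who is Hamlet ?\ncontext: Hamlet is the Prince of Denmark."])
# -> ValueError: ... argument needs to be of type (SquadExample, dict)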

ajeetkharel commented 8 months ago

The HuggingFacePipeline API doesn't support "question-answering" pipelines. Please look for alternate methods.


ajeetkharel commented 8 months ago

The HuggingFacePipeline API doesn't support "question-answering" pipelines. Please look for alternate methods.


Alternatively, you can use a "text2text-generation" model to achieve the task.
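
A minimal sketch of that alternative (the google/flan-t5-base model name is only an example, not from this thread; retriever and prompt are the objects built in the original code):

from transformers import pipeline
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from langchain.chains import RetrievalQA

# "text2text-generation" is a task HuggingFacePipeline can wrap
text2text = pipeline("text2text-generation", model="google/flan-t5-base", max_length=256)
llm = HuggingFacePipeline(pipeline=text2text)

chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,                   # FAISS retriever from the earlier example
    chain_type_kwargs={"prompt": prompt},  # prompt template with {context} and {question}
)
print(chain.invoke({"query": "Who is Hamlet ?"}))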

tempdeltavalue commented 7 months ago

FYI, you can probably do it like this with LlamaIndex:

service_context = ServiceContext.from_defaults(embed_model=embed_model, llm=None) # HuggingFaceEmbedding

index = VectorStoreIndex(
    nodes, 
    service_context=service_context, 
    show_progress=True
)

retriever = index.as_retriever(similarity_top_k=5)

new_query_engine = loaded_index.as_query_engine()
new_query_engine.query("Who am I?")

ajeetkharel commented 7 months ago

@tempdeltavalue Won't this only return you the relevant documents and not the actual llm result that you wanted?

SjoerdGn commented 7 months ago

I ran into the same problem, as did this blog post: https://medium.com/international-school-of-ai-data-science/implementing-rag-with-langchain-and-hugging-face-28e3ea66c5f7 The problem is, I think, that QuestionAnsweringArgumentHandler from the Hugging Face transformers package, and specifically its normalize function, wants either a SquadExample object (with the context and question as separate attributes, if I'm correct) or a dict in the form {"context": "<<context>>", "question": "<<question>>"}. However, the code path it goes through before reaching this QuestionAnsweringArgumentHandler (self.llm.generate_prompt in the LLMChain class) builds a single prompt string, first packaged in a prompt object, from the context and question.

It seems like it's not an easy fix in the langchain package, so I'm now looking into a workaround in the transformers package.
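
One way to sidestep the issue entirely (a sketch, assuming the retriever and question_answerer objects from the original example) is to do the retrieval manually and call the question-answering pipeline with the dict it expects:

question = "Who is Hamlet ?"

# Retrieve context with the FAISS retriever, then call the QA pipeline directly
docs = retriever.get_relevant_documents(question)
context = "\n\n".join(doc.page_content for doc in docs)

result = question_answerer({"question": question, "context": context})
print(result)  # dict with 'score', 'start', 'end' and 'answer'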

SjoerdGn commented 7 months ago

If you fix the problem mentioned in my previous answer:

I ran into the same problem, as did this blog post: https://medium.com/international-school-of-ai-data-science/implementing-rag-with-langchain-and-hugging-face-28e3ea66c5f7 The problem is, I think, that QuestionAnsweringArgumentHandler from the Hugging Face transformers package, and specifically its normalize function, wants either a SquadExample object (with the context and question as separate attributes, if I'm correct) or a dict in the form {"context": "<<context>>", "question": "<<question>>"}. However, the code path it goes through before reaching this QuestionAnsweringArgumentHandler (self.llm.generate_prompt in the LLMChain class) builds a single prompt string, first packaged in a prompt object, from the context and question. ....

by for example putting the following block of code

            if isinstance(item, str):
                # split the formatted prompt back into parts
                # to retrieve the context and question separately
                item_split = item\
                    .replace(
                        'answer the question:',
                        '\n------------\n'
                    )\
                    .split('\n------------\n')
                item = {
                    'context': item_split[1],
                    'question': item_split[3]
                }

between lines 218 and 219 of question_answering.py, you're back to the point @ajeetkharel mentioned (see the quoted comment below): 'question-answering' pipelines are not supported, and the error at line 222 of huggingface_pipeline.py is raised. Judging from the code, this is not just a matter of adding 'question-answering' to the list of supported tasks; the 'question-answering' pipeline genuinely works differently from the tasks that are supported.

The HuggingFacePipeline API doesn't support "question-answering" pipelines. Please look for alternate methods.


Anushagudipati commented 7 months ago

ValueError while running RetrievalQA.from_chain_type(llm=llm, chain_type="refine", retriever=retriever, return_source_documents=False)

# Specify the model name you want to use
model_name = "Intel/dynamic_tinybert"

# Load the tokenizer associated with the specified model
tokenizer = AutoTokenizer.from_pretrained(model_name, padding=True, truncation=True, max_length=512)

# Define a question-answering pipeline using the model and tokenizer
question_answerer = pipeline(
    "question-answering",
    model=model_name,
    tokenizer=tokenizer,
    return_tensors='pt'
)

# Create an instance of the HuggingFacePipeline, which wraps the question-answering pipeline
# with additional model-specific arguments (temperature and max_length)
llm = HuggingFacePipeline(
    pipeline=question_answerer,
    model_kwargs={"temperature": 0.7, "max_length": 512},
)

# Create a retriever object from the 'db' with a search configuration where it retrieves up to 4 relevant splits/documents
retriever = db.as_retriever(search_kwargs={"k": 4})

# Create a question-answering instance (qa) using the RetrievalQA class.
# It's configured with a language model (llm), a chain type "refine", the retriever we created,
# and an option to not return source documents.
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="refine", retriever=retriever, return_source_documents=False)

print(qa.input_keys)
question = "Who is Julius Steele Barnes?"

result = qa.invoke({"query": question})
result = qa({"query": question, "chat_history": ""})
print(result["result"])

ValueError                                Traceback (most recent call last)
in ()
      2 # result = qa.invoke({"query": question})
      3 print(qa.input_keys)
----> 4 result = qa({"query": question, "chat_history": ""})
      5 print("hi")
      6 print(result,"*")

27 frames

/usr/local/lib/python3.10/dist-packages/transformers/pipelines/question_answering.py in normalize(self, item)
    170
    171             return QuestionAnsweringPipeline.create_sample(**item)
--> 172         raise ValueError(f"{item} argument needs to be of type (SquadExample, dict)")
    173
    174     def __call__(self, *args, **kwargs):

ValueError: Context information is below.
------------
"Julius Steele Barnes (23 February 1792 – 12 November 1870) was an American physician. Besides being a skillful practitioner, and devoted to his calling, he also labored heartily for the social good of the community. He served one term as Connecticut State Senator, and held for a time the office of Judge of Probate."
------------
Given the context information and not prior knowledge, answer the question: Who is Julius Steele Barnes? argument needs to be of type (SquadExample, dict)

tawhidwasik08 commented 7 months ago

Downgrading to langchain==0.1.7 does not show this error for me (at least for now).