langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com

Retrieval Question/Answering Example not working in 0.0.200 #6162

Closed avi0gaur closed 1 year ago

avi0gaur commented 1 year ago

System Info

The problem seems to be in the code below.

Exception: TypeError: 'method' object is not iterable

Working version: langchain==0.0.164

Use case: https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html

The issue is in the method below:

def dict(self, **kwargs: Any) -> Dict:
    """Return a dictionary of the LLM."""
    starter_dict = dict(self._identifying_params)
    starter_dict["_type"] = self._llm_type
    return starter_dict

Who can help?

No response

Information

Related Components

Reproduction

https://python.langchain.com/en/latest/modules/chains/index_examples/vector_db_qa.html

Try the steps in the linked example; a minimal sketch follows.
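
For reference, a minimal sketch of the linked retrieval-QA example, assuming the 0.0.x API, an OpenAI API key, and a local state_of_the_union.txt:

from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

# Build a small vector index over one document.
documents = TextLoader("state_of_the_union.txt").load()
texts = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(documents)
docsearch = Chroma.from_documents(texts, OpenAIEmbeddings())

# Running the chain is what triggered the reported TypeError with a custom LLM.
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever()
)
print(qa.run("What did the president say about Ketanji Brown Jackson?"))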

Expected behavior

It should work as per the example.

hwchase17 commented 1 year ago

What LLM are you using?

mukut03 commented 1 year ago

I'd like to tackle this issue

avi0gaur commented 1 year ago

> What LLM are you using?

OpenAI's GPT-3.5.

The LLM isn't the problem, as everything works fine after downgrading to langchain==0.0.164.

I would urge anyone to run the above use case in a Colab notebook to see if you can replicate the issue.

The above-mentioned use case link is no longer available. Has this been resolved in any newer version?

avi0gaur commented 1 year ago
  File "/Users/avinashgaur/opt/anaconda3/envs/jarvis1/lib/python3.10/site-packages/langchain/llms/base.py", line 448, in dict
    starter_dict = dict(self._identifying_params)
TypeError: 'method' object is not iterable

llms/base.py

    def dict(self, **kwargs: Any) -> Dict:
        """Return a dictionary of the LLM."""
        starter_dict = dict(self._identifying_params)
        starter_dict["_type"] = self._llm_type
        return starter_dict

The problem is in the above piece of code.
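
A minimal sketch of what goes wrong (hypothetical classes, just to illustrate): without the @property decorator, self._identifying_params is a bound method, and passing a method to dict() raises exactly this TypeError.

from typing import Any, Mapping

class BrokenLLM:
    # Missing @property: _identifying_params stays a bound method.
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"model": "demo"}

class FixedLLM:
    @property  # With @property, attribute access returns the mapping itself.
    def _identifying_params(self) -> Mapping[str, Any]:
        return {"model": "demo"}

dict(FixedLLM()._identifying_params)   # OK: {'model': 'demo'}
dict(BrokenLLM()._identifying_params)  # TypeError: 'method' object is not iterable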

avi0gaur commented 1 year ago

I found the issue: I was creating a custom LLM for calling a REST service.

from typing import Any, List, Mapping, Optional

import requests
from langchain.llms.base import LLM

class VicunaLLM_7(LLM):
    """Vicuna LLM exposed through a REST service."""

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(self, prompt: str, stop: Optional[List[str]] = None) -> str:
        """Call the LLM over HTTP."""
        request = {
            "model": "fastchat-t5-3b-v1.0",
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 1024,
        }

        # URL_7 is the endpoint of the REST service (defined elsewhere).
        response = requests.post(URL_7, json=request)
        response.raise_for_status()  # raise on errors instead of silently returning None

        result = response.json()["choices"][0]["message"]["content"]
        print("llm response: \n", result)
        return result

    @property  # This decorator is mandatory in newer versions (> 0.0.164)
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {
            "model": "fastchat-t5-3b-v1.0",
            "vicuna": "7B",
        }
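
With the decorator in place, serialization works again. A quick check, assuming the 0.0.200 LLM.dict() implementation quoted above:

llm = VicunaLLM_7()
# _identifying_params is now a property, so dict() receives a mapping:
print(llm.dict())  # {'model': 'fastchat-t5-3b-v1.0', 'vicuna': '7B', '_type': 'custom'}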

Please raise a more insightful exception in this case, e.g. one pointing out that _identifying_params must be a property.