langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

TypeError: call() missing 1 required positional argument: 'prompt' #27451

Open 861482002 opened 1 month ago

861482002 commented 1 month ago


Example Code

When I use langchain_community's ChatTongyi:

from langchain_community.chat_models import ChatTongyi
from langchain_core.messages import HumanMessage

chat_model = ChatTongyi(model='qwen-turbo')
prompt = HumanMessage('你好')
chat_model([prompt])

Error Message and Stack Trace (if applicable)


TypeError                                 Traceback (most recent call last)
Cell In[4], line 2
      1 prompt = HumanMessage('你好')
----> 2 chat_model([prompt])

File D:\Anaconda\envs\python3.9\lib\site-packages\langchain_core\_api\deprecation.py:182, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    180     warned = True
    181     emit_warning()
--> 182 return wrapped(*args, **kwargs)

File D:\Anaconda\envs\python3.9\lib\site-packages\langchain_core\language_models\chat_models.py:1017, in BaseChatModel.__call__(self, messages, stop, callbacks, **kwargs)
   1009 @deprecated("0.1.7", alternative="invoke", removal="1.0")
   1010 def __call__(
   1011     self,
   (...)
   1015     **kwargs: Any,
   1016 ) -> BaseMessage:
-> 1017 generation = self.generate(
   1018     [messages], stop=stop, callbacks=callbacks, **kwargs
   1019 ).generations[0][0]
   1020 if isinstance(generation, ChatGeneration):
   1021     return generation.message

File D:\Anaconda\envs\python3.9\lib\site-packages\langchain_core\language_models\chat_models.py:643, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    641 if run_managers:
    642     run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 643 raise e
    644 flattened_outputs = [
    645     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]
    646     for res in results
    647 ]
    648 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File D:\Anaconda\envs\python3.9\lib\site-packages\langchain_core\language_models\chat_models.py:633, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    630 for i, m in enumerate(messages):
    631     try:
    632         results.append(
--> 633             self._generate_with_cache(
    634                 m,
    635                 stop=stop,
    636                 run_manager=run_managers[i] if run_managers else None,
    637                 **kwargs,
    638             )
    639         )
    640     except BaseException as e:
    641         if run_managers:

File D:\Anaconda\envs\python3.9\lib\site-packages\langchain_core\language_models\chat_models.py:851, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    849 else:
    850     if inspect.signature(self._generate).parameters.get("run_manager"):
--> 851         result = self._generate(
    852             messages, stop=stop, run_manager=run_manager, **kwargs
    853         )
    854     else:
    855         result = self._generate(messages, stop=stop, **kwargs)

File D:\Anaconda\envs\python3.9\lib\site-packages\langchain_community\chat_models\tongyi.py:650, in ChatTongyi._generate(self, messages, stop, run_manager, **kwargs)
    645 params: Dict[str, Any] = self._invocation_params(
    646     messages=messages, stop=stop, **kwargs
    647 )
    648 prompt = params.get("messages")
--> 650 resp = self.completion_with_retry(**params)  # original
    651 generations.append(
    652     ChatGeneration(**self._chat_generation_from_qwen_resp(resp))
    653 )
    654 return ChatResult(
    655     generations=generations,
    656     llm_output={
    657         "model_name": self.model_name,
    658     },
    659 )

File D:\Anaconda\envs\python3.9\lib\site-packages\langchain_community\chat_models\tongyi.py:534, in ChatTongyi.completion_with_retry(self, **kwargs)
    530     resp = self.client.call(**_kwargs)
    531     return check_response(resp)
--> 534 return _completion_with_retry(**kwargs)

File D:\Anaconda\envs\python3.9\lib\site-packages\tenacity\__init__.py:336, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
    334 copy = self.copy()
    335 wrapped_f.statistics = copy.statistics  # type: ignore[attr-defined]
--> 336 return copy(f, *args, **kw)

File D:\Anaconda\envs\python3.9\lib\site-packages\tenacity\__init__.py:475, in Retrying.__call__(self, fn, *args, **kwargs)
    473 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
    474 while True:
--> 475     do = self.iter(retry_state=retry_state)
    476     if isinstance(do, DoAttempt):
    477         try:

File D:\Anaconda\envs\python3.9\lib\site-packages\tenacity\__init__.py:376, in BaseRetrying.iter(self, retry_state)
    374 result = None
    375 for action in self.iter_state.actions:
--> 376     result = action(retry_state)
    377 return result

File D:\Anaconda\envs\python3.9\lib\site-packages\tenacity\__init__.py:398, in BaseRetrying._post_retry_check_actions.<locals>.<lambda>(rs)
    396 def _post_retry_check_actions(self, retry_state: "RetryCallState") -> None:
    397     if not (self.iter_state.is_explicit_retry or self.iter_state.retry_run_result):
--> 398         self._add_action_func(lambda rs: rs.outcome.result())
    399         return
    401     if self.after is not None:

File D:\Anaconda\envs\python3.9\lib\concurrent\futures\_base.py:439, in Future.result(self, timeout)
    437     raise CancelledError()
    438 elif self._state == FINISHED:
--> 439     return self.__get_result()
    441 self._condition.wait(timeout)
    443 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:

File D:\Anaconda\envs\python3.9\lib\concurrent\futures\_base.py:391, in Future.__get_result(self)
    389 if self._exception:
    390     try:
--> 391         raise self._exception
    392     finally:
    393         # Break a reference cycle with the exception in self._exception
    394         self = None

File D:\Anaconda\envs\python3.9\lib\site-packages\tenacity\__init__.py:478, in Retrying.__call__(self, fn, *args, **kwargs)
    476 if isinstance(do, DoAttempt):
    477     try:
--> 478         result = fn(*args, **kwargs)
    479     except BaseException:  # noqa: B902
    480         retry_state.set_exception(sys.exc_info())  # type: ignore[arg-type]

File D:\Anaconda\envs\python3.9\lib\site-packages\langchain_community\chat_models\tongyi.py:530, in ChatTongyi.completion_with_retry.<locals>._completion_with_retry(**_kwargs)
    528 @retry_decorator
    529 def _completion_with_retry(**_kwargs: Any) -> Any:
--> 530     resp = self.client.call(**_kwargs)
    531     return check_response(resp)

TypeError: call() missing 1 required positional argument: 'prompt'
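A note on reading this trace: the middle frames (tenacity and concurrent.futures) are just the retry wrapper replaying the original exception. tenacity parks each attempt's outcome in a Future and re-raises it via `rs.outcome.result()`, so the real failure is the last frame, `self.client.call(**_kwargs)`. A minimal stdlib sketch of that re-raise pattern (illustrative names, not tenacity's actual code):

```python
from concurrent.futures import Future

def retry_once(fn):
    # Mimic how a retry wrapper surfaces the original exception: the
    # attempt's outcome is stored in a Future, and Future.result()
    # re-raises it (hence the concurrent.futures frames in the trace).
    def wrapped(*args, **kwargs):
        outcome = Future()
        try:
            outcome.set_result(fn(*args, **kwargs))
        except BaseException as exc:
            outcome.set_exception(exc)  # stored, like retry_state.outcome
        return outcome.result()         # re-raises via Future.__get_result
    return wrapped

@retry_once
def call(prompt, **kwargs):  # stand-in for self.client.call
    return f"echo: {prompt}"

print(call("hi"))
try:
    call(model="qwen-turbo")  # no 'prompt' argument, as in the failing frame
except TypeError as e:
    print(e)  # call() missing 1 required positional argument: 'prompt'
```

So the retry layer adds no information here; the exception to debug is the one raised inside `_completion_with_retry`.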

Description

When I use langchain_community's ChatTongyi, I do pass the prompt, but the error above is still raised. Why?
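The last frame of the trace shows the shape of the failure: ChatTongyi forwards its invocation params with `self.client.call(**_kwargs)`, but the params dict carries a `messages` key while the client it resolved to requires a positional `prompt`. The mechanics can be reproduced in plain Python (the `call`/`prompt`/`messages` names mirror the trace; everything else is a hypothetical stand-in, not the dashscope API):

```python
# Stand-in for a 'prompt'-based client entry point like the one the
# trace ends in; the real client is whatever ChatTongyi resolved.
def call(prompt, model=None, **kwargs):
    return {"output": {"text": f"reply to {prompt!r}"}}

# What the chat path builds: a 'messages' key, and no 'prompt' key.
params = {
    "model": "qwen-turbo",
    "messages": [{"role": "user", "content": "你好"}],
}

try:
    call(**params)  # same shape as self.client.call(**_kwargs)
except TypeError as e:
    print(e)  # call() missing 1 required positional argument: 'prompt'
```

In other words, the prompt is being sent, but under a keyword the resolved client does not accept, which is why the error persists no matter how the messages are written.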

System Info

System Information

OS: Windows
OS Version: 10.0.22631
Python Version: 3.9.18 (main, Sep 11 2023, 14:09:26) [MSC v.1916 64 bit (AMD64)]

Package Information

langchain_core: 0.3.10
langchain: 0.3.3
langchain_community: 0.3.2
langsmith: 0.1.135
langchain_chroma: 0.1.4
langchain_huggingface: 0.1.0
langchain_openai: 0.2.2
langchain_text_splitters: 0.3.0
langgraph: 0.2.38

Optional packages not installed

langserve

Other Dependencies

aiohttp: 3.9.3
async-timeout: 4.0.3
chromadb: 0.5.15
dataclasses-json: 0.5.14
fastapi: 0.115.0
httpx: 0.27.2
huggingface-hub: 0.23.4
jsonpatch: 1.33
langgraph-checkpoint: 2.0.1
langgraph-sdk: 0.1.33
numpy: 1.26.4
openai: 1.51.2
orjson: 3.10.7
packaging: 24.0
pydantic: 2.9.2
pydantic-settings: 2.5.2
PyYAML: 6.0.1
requests: 2.31.0
requests-toolbelt: 1.0.0
sentence-transformers: 3.2.0
SQLAlchemy: 2.0.35
tenacity: 8.5.0
tiktoken: 0.8.0
tokenizers: 0.19.1
transformers: 4.42.4
typing-extensions: 4.11.0

keenborder786 commented 1 month ago
from langchain_community.chat_models import ChatTongyi
from langchain_core.messages import HumanMessage, SystemMessage

chat_model = ChatTongyi(model='qwen-turbo')
messages = [
    SystemMessage(content="你是一个乐于助人的助手"),
    HumanMessage(content="你好"),
]
chat_model(messages)

Try this!
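Separately from the bug itself: `chat_model(messages)` goes through `BaseChatModel.__call__`, which the stack trace shows is deprecated (`@deprecated("0.1.7", alternative="invoke", removal="1.0")`), so the non-deprecated spelling is `chat_model.invoke(messages)`. A schematic stdlib stand-in for that wrapper pattern (a toy class, not LangChain's code):

```python
import warnings

class ToyChatModel:
    """Schematic of the deprecation seen in the trace: __call__ only
    warns and forwards; invoke is the supported entry point."""

    def invoke(self, messages):
        # A real model would hit the API here; we just echo the last message.
        return f"ai: {messages[-1][1]}"

    def __call__(self, messages):
        warnings.warn(
            "__call__ is deprecated since 0.1.7, use invoke instead",
            DeprecationWarning,
        )
        return self.invoke(messages)

chat_model = ToyChatModel()
msgs = [("system", "你是一个乐于助人的助手"), ("human", "你好")]
print(chat_model.invoke(msgs))  # preferred entry point
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    print(chat_model(msgs))     # deprecated spelling, same result
```

With the real ChatTongyi, the equivalent change is simply `chat_model.invoke(messages)`, although in this issue both spellings fail at the same `self.client.call` frame.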

861482002 commented 1 month ago

@keenborder786 Thank you very much for your answer. I tried the code you sent, but the problem still exists. I also tried the ChatGLM chat model and found that it runs fine, so I suspect the cause of the bug is that Tongyi's team did not adapt the LangChain integration properly.