langchain-ai / langchain

šŸ¦œšŸ”— Build context-aware reasoning applications
https://python.langchain.com
MIT License

langchain-mistralai: client attribute not recognized #21007

Closed · Sachin-Bhat closed this issue 6 months ago

Sachin-Bhat commented 6 months ago

Checked other resources

Example Code

import httpx
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_mistralai import ChatMistralAI

prompt = PromptTemplate(
    input_variables=["alpha", "beta"],
    template=(
        """lorem ipsum sit amet dolor: '{alpha}',
        generate additional lorem ipsum {beta} times.
        For example: if there are alpha lorem ipsum, the final lorem ipsum must be beta. Output: """
    ),
)

chain = (
    prompt
    | ChatMistralAI(
        temperature=0,
        model="mixtral-8x7b-instruct-v01",
        endpoint="https://some-openai-compatible-endpoint.com/v1",
        api_key="whatever",
        client=httpx.Client(verify=False),
        max_tokens=8000,
        safe_mode=True,
        streaming=True,
    )
    | StrOutputParser()
    | (lambda x: x.split("\n"))
)

alpha = "lorem ipsum"
beta = 4
output = chain.invoke({"alpha": alpha, "beta": beta})
output

Error Message and Stack Trace (if applicable)

{
    "name": "ConnectError",
    "message": "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)",
    "stack": "---------------------------------------------------------------------------
ConnectError                              Traceback (most recent call last)
File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpx\\_transports\\default.py:69, in map_httpcore_exceptions()
     68 try:
---> 69     yield
     70 except Exception as exc:

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpx\\_transports\\default.py:233, in HTTPTransport.handle_request(self, request)
    232 with map_httpcore_exceptions():
--> 233     resp = self._pool.handle_request(req)
    235 assert isinstance(resp.stream, typing.Iterable)

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpcore\\_sync\\connection_pool.py:216, in ConnectionPool.handle_request(self, request)
    215     self._close_connections(closing)
--> 216     raise exc from None
    218 # Return the response. Note that in this case we still have to manage
    219 # the point at which the response is closed.

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpcore\\_sync\\connection_pool.py:196, in ConnectionPool.handle_request(self, request)
    194 try:
    195     # Send the request on the assigned connection.
--> 196     response = connection.handle_request(
    197         pool_request.request
    198     )
    199 except ConnectionNotAvailable:
    200     # In some cases a connection may initially be available to
    201     # handle a request, but then become unavailable.
    202     #
    203     # In this case we clear the connection and try again.

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpcore\\_sync\\connection.py:99, in HTTPConnection.handle_request(self, request)
     98     self._connect_failed = True
---> 99     raise exc
    101 return self._connection.handle_request(request)

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpcore\\_sync\\connection.py:76, in HTTPConnection.handle_request(self, request)
     75 if self._connection is None:
---> 76     stream = self._connect(request)
     78     ssl_object = stream.get_extra_info(\"ssl_object\")

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpcore\\_sync\\connection.py:154, in HTTPConnection._connect(self, request)
    153 with Trace(\"start_tls\", logger, request, kwargs) as trace:
--> 154     stream = stream.start_tls(**kwargs)
    155     trace.return_value = stream

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpcore\\_backends\\sync.py:152, in SyncStream.start_tls(self, ssl_context, server_hostname, timeout)
    148 exc_map: ExceptionMapping = {
    149     socket.timeout: ConnectTimeout,
    150     OSError: ConnectError,
    151 }
--> 152 with map_exceptions(exc_map):
    153     try:

File ~\\scoop\\persist\\rye\\py\\cpython@3.11.8\\Lib\\contextlib.py:158, in _GeneratorContextManager.__exit__(self, typ, value, traceback)
    157 try:
--> 158     self.gen.throw(typ, value, traceback)
    159 except StopIteration as exc:
    160     # Suppress StopIteration *unless* it's the same exception that
    161     # was passed to throw().  This prevents a StopIteration
    162     # raised inside the \"with\" statement from being suppressed.

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpcore\\_exceptions.py:14, in map_exceptions(map)
     13     if isinstance(exc, from_exc):
---> 14         raise to_exc(exc) from exc
     15 raise

ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)

The above exception was the direct cause of the following exception:

ConnectError                              Traceback (most recent call last)
Cell In[7], line 3
      1 alpha = \"lorem ipsum\"
      2 beta = 4
----> 3 output = chain.invoke({\"alpha\": alpha, \"beta\": beta})
      4 output

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\langchain_core\\runnables\\base.py:2499, in RunnableSequence.invoke(self, input, config)
   2497 try:
   2498     for i, step in enumerate(self.steps):
-> 2499         input = step.invoke(
   2500             input,
   2501             # mark each step as a child run
   2502             patch_config(
   2503                 config, callbacks=run_manager.get_child(f\"seq:step:{i+1}\")
   2504             ),
   2505         )
   2506 # finish the root run
   2507 except BaseException as e:

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py:158, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    147 def invoke(
    148     self,
    149     input: LanguageModelInput,
   (...)
    153     **kwargs: Any,
    154 ) -> BaseMessage:
    155     config = ensure_config(config)
    156     return cast(
    157         ChatGeneration,
--> 158         self.generate_prompt(
    159             [self._convert_input(input)],
    160             stop=stop,
    161             callbacks=config.get(\"callbacks\"),
    162             tags=config.get(\"tags\"),
    163             metadata=config.get(\"metadata\"),
    164             run_name=config.get(\"run_name\"),
    165             run_id=config.pop(\"run_id\", None),
    166             **kwargs,
    167         ).generations[0][0],
    168     ).message

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py:560, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    552 def generate_prompt(
    553     self,
    554     prompts: List[PromptValue],
   (...)
    557     **kwargs: Any,
    558 ) -> LLMResult:
    559     prompt_messages = [p.to_messages() for p in prompts]
--> 560     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py:421, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    419         if run_managers:
    420             run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 421         raise e
    422 flattened_outputs = [
    423     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]
    424     for res in results
    425 ]
    426 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py:411, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    408 for i, m in enumerate(messages):
    409     try:
    410         results.append(
--> 411             self._generate_with_cache(
    412                 m,
    413                 stop=stop,
    414                 run_manager=run_managers[i] if run_managers else None,
    415                 **kwargs,
    416             )
    417         )
    418     except BaseException as e:
    419         if run_managers:

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py:632, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    630 else:
    631     if inspect.signature(self._generate).parameters.get(\"run_manager\"):
--> 632         result = self._generate(
    633             messages, stop=stop, run_manager=run_manager, **kwargs
    634         )
    635     else:
    636         result = self._generate(messages, stop=stop, **kwargs)

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\langchain_mistralai\\chat_models.py:452, in ChatMistralAI._generate(self, messages, stop, run_manager, stream, **kwargs)
    448 if should_stream:
    449     stream_iter = self._stream(
    450         messages, stop=stop, run_manager=run_manager, **kwargs
    451     )
--> 452     return generate_from_stream(stream_iter)
    454 message_dicts, params = self._create_message_dicts(messages, stop)
    455 params = {**params, **kwargs}

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\langchain_core\\language_models\\chat_models.py:67, in generate_from_stream(stream)
     64 \"\"\"Generate from a stream.\"\"\"
     66 generation: Optional[ChatGenerationChunk] = None
---> 67 for chunk in stream:
     68     if generation is None:
     69         generation = chunk

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\langchain_mistralai\\chat_models.py:499, in ChatMistralAI._stream(self, messages, stop, run_manager, **kwargs)
    496 params = {**params, **kwargs, \"stream\": True}
    498 default_chunk_class: Type[BaseMessageChunk] = AIMessageChunk
--> 499 for chunk in self.completion_with_retry(
    500     messages=message_dicts, run_manager=run_manager, **params
    501 ):
    502     if len(chunk[\"choices\"]) == 0:
    503         continue

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\langchain_mistralai\\chat_models.py:366, in ChatMistralAI.completion_with_retry.<locals>._completion_with_retry.<locals>.iter_sse()
    365 def iter_sse() -> Iterator[Dict]:
--> 366     with connect_sse(
    367         self.client, \"POST\", \"/chat/completions\", json=kwargs
    368     ) as event_source:
    369         _raise_on_error(event_source.response)
    370         for event in event_source.iter_sse():

File ~\\scoop\\persist\\rye\\py\\cpython@3.11.8\\Lib\\contextlib.py:137, in _GeneratorContextManager.__enter__(self)
    135 del self.args, self.kwds, self.func
    136 try:
--> 137     return next(self.gen)
    138 except StopIteration:
    139     raise RuntimeError(\"generator didn't yield\") from None

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpx_sse\\_api.py:54, in connect_sse(client, method, url, **kwargs)
     51 headers[\"Accept\"] = \"text/event-stream\"
     52 headers[\"Cache-Control\"] = \"no-store\"
---> 54 with client.stream(method, url, headers=headers, **kwargs) as response:
     55     yield EventSource(response)

File ~\\scoop\\persist\\rye\\py\\cpython@3.11.8\\Lib\\contextlib.py:137, in _GeneratorContextManager.__enter__(self)
    135 del self.args, self.kwds, self.func
    136 try:
--> 137     return next(self.gen)
    138 except StopIteration:
    139     raise RuntimeError(\"generator didn't yield\") from None

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpx\\_client.py:870, in Client.stream(self, method, url, content, data, files, json, params, headers, cookies, auth, follow_redirects, timeout, extensions)
    847 \"\"\"
    848 Alternative to `httpx.request()` that streams the response body
    849 instead of loading it into memory at once.
   (...)
    855 [0]: /quickstart#streaming-responses
    856 \"\"\"
    857 request = self.build_request(
    858     method=method,
    859     url=url,
   (...)
    868     extensions=extensions,
    869 )
--> 870 response = self.send(
    871     request=request,
    872     auth=auth,
    873     follow_redirects=follow_redirects,
    874     stream=True,
    875 )
    876 try:
    877     yield response

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpx\\_client.py:914, in Client.send(self, request, stream, auth, follow_redirects)
    906 follow_redirects = (
    907     self.follow_redirects
    908     if isinstance(follow_redirects, UseClientDefault)
    909     else follow_redirects
    910 )
    912 auth = self._build_request_auth(request, auth)
--> 914 response = self._send_handling_auth(
    915     request,
    916     auth=auth,
    917     follow_redirects=follow_redirects,
    918     history=[],
    919 )
    920 try:
    921     if not stream:

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpx\\_client.py:942, in Client._send_handling_auth(self, request, auth, follow_redirects, history)
    939 request = next(auth_flow)
    941 while True:
--> 942     response = self._send_handling_redirects(
    943         request,
    944         follow_redirects=follow_redirects,
    945         history=history,
    946     )
    947     try:
    948         try:

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpx\\_client.py:979, in Client._send_handling_redirects(self, request, follow_redirects, history)
    976 for hook in self._event_hooks[\"request\"]:
    977     hook(request)
--> 979 response = self._send_single_request(request)
    980 try:
    981     for hook in self._event_hooks[\"response\"]:

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpx\\_client.py:1015, in Client._send_single_request(self, request)
   1010     raise RuntimeError(
   1011         \"Attempted to send an async request with a sync Client instance.\"
   1012     )
   1014 with request_context(request=request):
-> 1015     response = transport.handle_request(request)
   1017 assert isinstance(response.stream, SyncByteStream)
   1019 response.request = request

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpx\\_transports\\default.py:232, in HTTPTransport.handle_request(self, request)
    218 assert isinstance(request.stream, SyncByteStream)
    220 req = httpcore.Request(
    221     method=request.method,
    222     url=httpcore.URL(
   (...)
    230     extensions=request.extensions,
    231 )
--> 232 with map_httpcore_exceptions():
    233     resp = self._pool.handle_request(req)
    235 assert isinstance(resp.stream, typing.Iterable)

File ~\\scoop\\persist\\rye\\py\\cpython@3.11.8\\Lib\\contextlib.py:158, in _GeneratorContextManager.__exit__(self, typ, value, traceback)
    156     value = typ()
    157 try:
--> 158     self.gen.throw(typ, value, traceback)
    159 except StopIteration as exc:
    160     # Suppress StopIteration *unless* it's the same exception that
    161     # was passed to throw().  This prevents a StopIteration
    162     # raised inside the \"with\" statement from being suppressed.
    163     return exc is not value

File c:\\Users\\Sachin_Bhat\\Documents\\dev\\package\\.venv\\Lib\\site-packages\\httpx\\_transports\\default.py:86, in map_httpcore_exceptions()
     83     raise
     85 message = str(exc)
---> 86 raise mapped_exc(message) from exc

ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1006)"
}

Description

I just need the model's response as the output, but even after passing a custom `client` I still get SSL verification errors, so the `client` attribute does not seem to be recognized. For comparison, I had a look at langchain-openai: `ChatOpenAI` defines two parameters, `http_client` and `http_async_client`, in addition to the private `client` and `async_client` fields:

client: Any = Field(default=None, exclude=True)  #: :meta private:
async_client: Any = Field(default=None, exclude=True)  #: :meta private:
http_client: Union[Any, None] = None
"""Optional httpx.Client. Only used for sync invocations. Must specify
http_async_client as well if you'd like a custom client for async invocations."""
http_async_client: Union[Any, None] = None
"""Optional httpx.AsyncClient. Only used for async invocations. Must specify
http_client as well if you'd like a custom client for sync invocations."""
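
For reference, a minimal sketch of how that `http_client` parameter works around the same self-signed-certificate problem on the OpenAI side; the endpoint URL, API key, and model name are placeholders carried over from the example above, not real values:

import httpx
from langchain_openai import ChatOpenAI

# Sync httpx client with certificate verification disabled; in a real deployment
# you would point `verify` at the internal CA bundle instead of turning it off.
insecure_client = httpx.Client(verify=False)

llm = ChatOpenAI(
    model="mixtral-8x7b-instruct-v01",                          # placeholder
    base_url="https://some-openai-compatible-endpoint.com/v1",  # placeholder
    api_key="whatever",                                         # placeholder
    http_client=insecure_client,  # only used for sync invocations
)

Passing `client=httpx.Client(verify=False)` to `ChatMistralAI` 0.1.4 does not have an equivalent effect, which is what the stack trace above shows.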

System Info

System Information

OS: Windows
OS Version: 10.0.22621
Python Version: 3.11.8 (main, Feb 25 2024, 03:41:44) [MSC v.1929 64 bit (AMD64)]

Package Information

langchain_core: 0.1.46
langchain: 0.1.16
langchain_community: 0.0.34
langsmith: 0.1.51
langchain_mistralai: 0.1.4
langchain_openai: 0.1.4
langchain_postgres: 0.0.3
langchain_text_splitters: 0.0.1
langchainhub: 0.1.15
langgraph: 0.0.32

Packages not installed (Not Necessarily a Problem)

The following packages were not found:

langserve

efriis commented 6 months ago

Should be fixed in langchain-mistralai==0.1.5!

Note that, for now, you'll have to manually construct your headers and similar settings on the custom client to mimic what the default client does: https://github.com/langchain-ai/langchain/blob/master/libs/partners/mistralai/langchain_mistralai/chat_models.py#L411-L431
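
A rough sketch of that suggestion, assuming langchain-mistralai >= 0.1.5 and that the default client's setup matches the linked source (a `base_url` plus JSON `Content-Type`/`Accept` headers and a bearer `Authorization` header); the endpoint, key, and model name are the placeholders from the original report:

import httpx
from langchain_mistralai import ChatMistralAI

endpoint = "https://some-openai-compatible-endpoint.com/v1"  # placeholder
api_key = "whatever"                                         # placeholder

# Mimic what the default sync client sets up (see the linked chat_models.py),
# but with certificate verification disabled for the self-signed endpoint.
custom_client = httpx.Client(
    base_url=endpoint,
    headers={
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Authorization": f"Bearer {api_key}",
    },
    verify=False,
    timeout=120,
)

llm = ChatMistralAI(
    model="mixtral-8x7b-instruct-v01",  # placeholder
    endpoint=endpoint,
    api_key=api_key,
    client=custom_client,
)

Presumably an `httpx.AsyncClient` built the same way would be passed as `async_client` for async invocations.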

efriis commented 6 months ago

#21034 was the fix.