langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

Object of type StreamingStdOutCallbackHandler is not JSON serializable #4070

Closed mrcaipeng closed 1 year ago

mrcaipeng commented 1 year ago

    import os
    from langchain.llms import OpenAI, Anthropic
    from langchain.chat_models import ChatOpenAI
    from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
    from langchain.schema import HumanMessage

    llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0)
    resp = llm("Write me a song about sparkling water.")

When I executed the above code, I got the error "Object of type StreamingStdOutCallbackHandler is not JSON serializable".

Did I do something wrong, or is there another issue here?

tomatefarcie123 commented 1 year ago

I have the same problem with that callback and others. Would love to hear how to solve this.

dev2049 commented 1 year ago

hm not able to reproduce from that snippet. does the error occur at instantiation of Handler or when the llm is called?

tomatefarcie123 commented 1 year ago

Not sure but I think it's when the LLM is called. It appears that the Handler object is being passed to the OpenAI API for some reason.
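
For what it's worth, that failure mode can be reproduced in isolation. Below is a minimal sketch, assuming an older langchain release that does not recognize callbacks as a constructor field and therefore moves it into model_kwargs, which is then forwarded into the OpenAI request body (the exact payload keys here are illustrative, not the real request):

    import json
    from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

    # Illustrative request payload: if `callbacks` ends up in model_kwargs,
    # the handler object travels along with the prompt to the OpenAI client.
    params = {
        "prompt": ["Write me a song about sparkling water."],
        "stream": True,
        "callbacks": [StreamingStdOutCallbackHandler()],  # not JSON serializable
    }

    # json.dumps is what the OpenAI client calls while preparing the request:
    json.dumps(params)
    # TypeError: Object of type StreamingStdOutCallbackHandler is not JSON serializable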

vowelparrot commented 1 year ago

Is this a logger warning?

tomatefarcie123 commented 1 year ago

Sorry if I'm hogging @mrcaipeng 's ticket. In my case it appears in the debug console of a Flask app. I'm trying to stream to a web page.

mrcaipeng commented 1 year ago

My code is from https://python.langchain.com/en/latest/modules/models/llms/examples/streaming_llm.html. I didn't change anything. I downloaded the ipynb file and executed it in my local Jupyter notebook. The version of langchain is 0.0.157. Then I saw the warning and error. The error log is below:

    WARNING! callbacks is not default parameter. callbacks was transfered to model_kwargs. Please confirm that callbacks is what you intended.

    TypeError Traceback (most recent call last) Cell In[14], line 3 1 llm = OpenAI(streaming=True, callbacks=[StreamingStdOutCallbackHandler()], temperature=0) 2 # llm = OpenAI(streaming=True, temperature=0) ----> 3 resp = llm("Write me a song about sparkling water.")
    File /opt/miniconda3/lib/python3.9/site-packages/langchain/llms/base.py:246, in BaseLLM.__call__(self, prompt, stop) 244 def __call__(self, prompt: str, stop: Optional[List[str]] = None) -> str: 245 """Check Cache and run the LLM on the given prompt and input.""" --> 246 return self.generate([prompt], stop=stop).generations[0][0].text
    File /opt/miniconda3/lib/python3.9/site-packages/langchain/llms/base.py:140, in BaseLLM.generate(self, prompts, stop) 138 except (KeyboardInterrupt, Exception) as e: 139 self.callback_manager.on_llm_error(e, verbose=self.verbose) --> 140 raise e 141 self.callback_manager.on_llm_end(output, verbose=self.verbose) 142 return output
    File /opt/miniconda3/lib/python3.9/site-packages/langchain/llms/base.py:137, in BaseLLM.generate(self, prompts, stop) 133 self.callback_manager.on_llm_start( 134 {"name": self.__class__.__name__}, prompts, verbose=self.verbose 135 ) 136 try: --> 137 output = self._generate(prompts, stop=stop) 138 except (KeyboardInterrupt, Exception) as e: 139 self.callback_manager.on_llm_error(e, verbose=self.verbose)
    File /opt/miniconda3/lib/python3.9/site-packages/langchain/llms/openai.py:282, in BaseOpenAI._generate(self, prompts, stop) 280 params["stream"] = True 281 response = _streaming_response_template() --> 282 for stream_resp in completion_with_retry( 283 self, prompt=_prompts, **params 284 ): 285 self.callback_manager.on_llm_new_token( 286 stream_resp["choices"][0]["text"], 287 verbose=self.verbose, 288 logprobs=stream_resp["choices"][0]["logprobs"], 289 ) 290 _update_response(response, stream_resp)
    File /opt/miniconda3/lib/python3.9/site-packages/langchain/llms/openai.py:102, in completion_with_retry(llm, **kwargs) 98 @retry_decorator 99 def _completion_with_retry(**kwargs: Any) -> Any: 100 return llm.client.create(**kwargs) --> 102 return _completion_with_retry(**kwargs)
    File /opt/miniconda3/lib/python3.9/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw) 287 @functools.wraps(f) 288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any: --> 289 return self(f, *args, **kw)
    File /opt/miniconda3/lib/python3.9/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs) 377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs) 378 while True: --> 379 do = self.iter(retry_state=retry_state) 380 if isinstance(do, DoAttempt): 381 try:
    File /opt/miniconda3/lib/python3.9/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state) 312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain) 313 if not (is_explicit_retry or self.retry(retry_state)): --> 314 return fut.result() 316 if self.after is not None: 317 self.after(retry_state)
    File /opt/miniconda3/lib/python3.9/concurrent/futures/_base.py:439, in Future.result(self, timeout) 437 raise CancelledError() 438 elif self._state == FINISHED: --> 439 return self.__get_result() 441 self._condition.wait(timeout) 443 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
    File /opt/miniconda3/lib/python3.9/concurrent/futures/_base.py:391, in Future.__get_result(self) 389 if self._exception: 390 try: --> 391 raise self._exception 392 finally: 393 # Break a reference cycle with the exception in self._exception 394 self = None
    File /opt/miniconda3/lib/python3.9/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs) 380 if isinstance(do, DoAttempt): 381 try: --> 382 result = fn(*args, **kwargs) 383 except BaseException: # noqa: B902 384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
    File /opt/miniconda3/lib/python3.9/site-packages/langchain/llms/openai.py:100, in completion_with_retry.<locals>._completion_with_retry(**kwargs) 98 @retry_decorator 99 def _completion_with_retry(**kwargs: Any) -> Any: --> 100 return llm.client.create(**kwargs)
    File /opt/miniconda3/lib/python3.9/site-packages/openai/api_resources/completion.py:25, in Completion.create(cls, *args, **kwargs) 23 while True: 24 try: ---> 25 return super().create(*args, **kwargs) 26 except TryAgain as e: 27 if timeout is not None and time.time() > start + timeout:
    File /opt/miniconda3/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py:153, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params) 127 @classmethod 128 def create( 129 cls, (...) 136 **params, 137 ): 138 ( 139 deployment_id, 140 engine, (...) 150 api_key, api_base, api_type, api_version, organization, **params 151 ) --> 153 response, _, api_key = requestor.request( 154 "post", 155 url, 156 params=params, 157 headers=headers, 158 stream=stream, 159 request_id=request_id, 160 request_timeout=request_timeout, 161 ) 163 if stream: 164 # must be an iterator 165 assert not isinstance(response, OpenAIResponse)
    File /opt/miniconda3/lib/python3.9/site-packages/openai/api_requestor.py:216, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout) 205 def request( 206 self, 207 method, (...) 214 request_timeout: Optional[Union[float, Tuple[float, float]]] = None, 215 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]: --> 216 result = self.request_raw( 217 method.lower(), 218 url, 219 params=params, 220 supplied_headers=headers, 221 files=files, 222 stream=stream, 223 request_id=request_id, 224 request_timeout=request_timeout, 225 ) 226 resp, got_stream = self._interpret_response(result, stream) 227 return resp, got_stream, self.api_key
    File /opt/miniconda3/lib/python3.9/site-packages/openai/api_requestor.py:509, in APIRequestor.request_raw(self, method, url, params, supplied_headers, files, stream, request_id, request_timeout) 497 def request_raw( 498 self, 499 method, (...) 507 request_timeout: Optional[Union[float, Tuple[float, float]]] = None, 508 ) -> requests.Response: --> 509 abs_url, headers, data = self._prepare_request_raw( 510 url, supplied_headers, method, params, files, request_id 511 ) 513 if not hasattr(_thread_context, "session"): 514 _thread_context.session = _make_session()
    File /opt/miniconda3/lib/python3.9/site-packages/openai/api_requestor.py:481, in APIRequestor._prepare_request_raw(self, url, supplied_headers, method, params, files, request_id) 479 data = params 480 if params and not files: --> 481 data = json.dumps(params).encode() 482 headers["Content-Type"] = "application/json" 483 else:
    File /opt/miniconda3/lib/python3.9/json/__init__.py:231, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw) 226 # cached encoder 227 if (not skipkeys and ensure_ascii and 228 check_circular and allow_nan and 229 cls is None and indent is None and separators is None and 230 default is None and not sort_keys and not kw): --> 231 return _default_encoder.encode(obj) 232 if cls is None: 233 cls = JSONEncoder
    File /opt/miniconda3/lib/python3.9/json/encoder.py:199, in JSONEncoder.encode(self, o) 195 return encode_basestring(o) 196 # This doesn't pass the iterator directly to ''.join() because the 197 # exceptions aren't as detailed. The list call should be roughly 198 # equivalent to the PySequence_Fast that ''.join() would do. --> 199 chunks = self.iterencode(o, _one_shot=True) 200 if not isinstance(chunks, (list, tuple)): 201 chunks = list(chunks)
    File /opt/miniconda3/lib/python3.9/json/encoder.py:257, in JSONEncoder.iterencode(self, o, _one_shot) 252 else: 253 _iterencode = _make_iterencode( 254 markers, self.default, _encoder, self.indent, floatstr, 255 self.key_separator, self.item_separator, self.sort_keys, 256 self.skipkeys, _one_shot) --> 257 return _iterencode(o, 0)
    File /opt/miniconda3/lib/python3.9/json/encoder.py:179, in JSONEncoder.default(self, o) 160 def default(self, o): 161 """Implement this method in a subclass such that it returns 162 a serializable object for o, or calls the base implementation 163 (to raise a TypeError). (...) 177 178 """ --> 179 raise TypeError(f'Object of type {o.__class__.__name__} ' 180 f'is not JSON serializable')

    TypeError: Object of type StreamingStdOutCallbackHandler is not JSON serializable

nullne commented 1 year ago

@mrcaipeng @tomatefarcie123 Try upgrading Python to 3.11, which works for me. FYI, langchain version: 0.0.161

gfranxman commented 1 year ago

I'm using python 3.11.1 and get the same error.

             ^^^^^^^^^^^^^^^^^^^
  File "/Users/gfranxman/Projects/LargeLanguageModels/llm_minimodel/.venv/lib/python3.11/site-packages/langchain/chat_models/openai.py", line 226, in _completion_with_retry
    return self.client.create(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gfranxman/Projects/LargeLanguageModels/llm_minimodel/.venv/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gfranxman/Projects/LargeLanguageModels/llm_minimodel/.venv/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/Users/gfranxman/Projects/LargeLanguageModels/llm_minimodel/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 216, in request
    result = self.request_raw(
             ^^^^^^^^^^^^^^^^^
  File "/Users/gfranxman/Projects/LargeLanguageModels/llm_minimodel/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 509, in request_raw
    abs_url, headers, data = self._prepare_request_raw(
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gfranxman/Projects/LargeLanguageModels/llm_minimodel/.venv/lib/python3.11/site-packages/openai/api_requestor.py", line 481, in _prepare_request_raw
    data = json.dumps(params).encode()
           ^^^^^^^^^^^^^^^^^^
  File "/Users/gfranxman/.pyenv/versions/3.11.1/lib/python3.11/json/__init__.py", line 231, in dumps
    return _default_encoder.encode(obj)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gfranxman/.pyenv/versions/3.11.1/lib/python3.11/json/encoder.py", line 200, in encode
    chunks = self.iterencode(o, _one_shot=True)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gfranxman/.pyenv/versions/3.11.1/lib/python3.11/json/encoder.py", line 258, in iterencode
    return _iterencode(o, 0)
           ^^^^^^^^^^^^^^^^^
  File "/Users/gfranxman/.pyenv/versions/3.11.1/lib/python3.11/json/encoder.py", line 180, in default
    raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type StreamingStdOutCallbackHandler is not JSON serializable
    % which python
    /Users/gfranxman/Projects/LargeLanguageModels/llm_minimodel/.venv/bin/python
    % python --version
    Python 3.11.1

I tried setting streaming to True, but not setting the callbacks until after the model has been initialized, like so:

    chat_openai_model_kwargs = dict(
        temperature=temperature, model_name=model_name, max_tokens=num_outputs
    )
    if streaming:
        chat_openai_model_kwargs["streaming"] = True

    llm_predictor = LLMPredictor(
        llm=ChatOpenAI(**chat_openai_model_kwargs)
    )

    if streaming:
        for handler in [StreamingStdOutCallbackHandler()]:
            llm_predictor.llm.callback_manager.add_handler(handler)

This appears to get things going.

tomatefarcie123 commented 1 year ago

Did you update to the latest version of langchain?

gfranxman commented 1 year ago

Extra note: StreamingStdOutCallbackHandler emits each token to stdout as the response is being composed. It seems to generate multiple responses which are refined. I tried subclassing it and found .on_text() was never called, but .on_llm_new_token() is called and is responsible for the output you see. I was hoping to use this to send back the tokens to the client. You can override .on_llm_end() to get each draft as they complete.
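
A minimal sketch of that kind of subclass, for illustration (the class name and the token buffer are hypothetical; only the on_llm_new_token / on_llm_end hooks come from the observation above):

    from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

    class TokenCollectorHandler(StreamingStdOutCallbackHandler):
        """Collects streamed tokens instead of writing them to stdout."""

        def __init__(self) -> None:
            super().__init__()
            self.tokens = []

        def on_llm_new_token(self, token, **kwargs) -> None:
            # Called once per generated token while streaming.
            self.tokens.append(token)

        def on_llm_end(self, response, **kwargs) -> None:
            # Called when a completion (draft) finishes generating.
            print("draft finished:", "".join(self.tokens))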

gfranxman commented 1 year ago

Did you update to the latest version of langchain?

Oops, no. I just did, which got me to 0.0.162, but then it broke on

  ...
  File "/Users/gfranxman/Projects/LargeLanguageModels/llm_minimodel/.venv/lib/python3.11/site-packages/llama_index/llm_predictor/base.py", line 12, in <module>
    from langchain.schema import BaseLanguageModel
ImportError: cannot import name 'BaseLanguageModel' from 'langchain.schema' (/Users/gfranxman/Projects/LargeLanguageModels/llm_minimodel/.venv/lib/python3.11/site-packages/langchain/schema.py)

Since llama_index is unhappy, I upgraded it next, which led to "llm-minimodel 0.1.0 requires llama-index<0.6.0,>=0.5.13.post1, but you have llama-index 0.6.2 which is incompatible". So I think I'm going to be in dependency hell for a while, but that's my problem.

SajalSinha commented 1 year ago

Hi, any updates on how to resolve it? I'm getting this issue too.

(screenshot attached)

tomatefarcie123 commented 1 year ago

I placed my (custom) callback handler in the llm declaration, upgraded langchain and that finally got rid of the problem:

    llm=ChatOpenAI(
        openai_api_key=env_variables["OPENAI_API_KEY"],
        streaming=True,
        temperature=0,
        model_name='gpt-3.5-turbo',
        max_tokens=256,
        callbacks=[handler] 
    )

Julian-Cao commented 1 year ago

Why was it closed? The issue is still here now.

Julian-Cao commented 1 year ago

The official demo code snippet: https://python.langchain.com/en/latest/modules/models/chat/examples/streaming.html, langchain version==0.0.176. I just ran it locally and it raised the type error: TypeError: Object of type StreamingStdOutCallbackHandler is not JSON serializable

vowelparrot commented 1 year ago

@jianglanCao what python version are you using? I can't reproduce this on the example you've linked

Julian-Cao commented 1 year ago

@jianglanCao what python version are you using? I can't reproduce this on the example you've linked

After upgrading my langchain version to the latest version, it disappeared.

vowelparrot commented 1 year ago

Seems to be resolved

AlefAlfa commented 1 year ago

It seems to be a Jupyter Notebook/Lab problem. The code runs in VS Code without any problems

MuhammadSafwan456 commented 1 year ago

@mrcaipeng I was facing the same issue. I resolved it like this

    llm = OpenAI(streaming=True, temperature=0)
    llm.callback_manager = StreamingStdOutCallbackHandler()

Instead of passing callbacks in the constructor, I assigned it explicitly, and somehow it is working fine for me.
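
For completeness, a related sketch: on releases that accept per-call callbacks (this relies on the callbacks refactor from roughly langchain 0.0.154 onward, so the exact version cutoff is an assumption), the handler can also be passed at invocation time rather than in the constructor, which likewise keeps it out of model_kwargs:

    from langchain.llms import OpenAI
    from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

    llm = OpenAI(streaming=True, temperature=0)
    # Pass the handler per call instead of in the constructor.
    resp = llm(
        "Write me a song about sparkling water.",
        callbacks=[StreamingStdOutCallbackHandler()],
    )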