camenduru / Qwen2-VL-jupyter


[Colab error] pydantic arbitrary_types_allowed=True error #7 #3

Open mathigatti opened 1 week ago

mathigatti commented 1 week ago

I got this error; updating gradio fixed it. I'm sharing it in case it's useful for someone.

Fix command:

```
pip install -U gradio==4.43.0
```

Error message:

```
    return self._unknown_type_schema(obj)
  File "/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_generate_schema.py", line 415, in _unknown_type_schema
    raise PydanticSchemaGenerationError(
pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for <class 'starlette.requests.Request'>. Set arbitrary_types_allowed=True in the model_config to ignore this error or implement __get_pydantic_core_schema__ on your type to fully support it.

If you got this error by calling handler(<some type>) within __get_pydantic_core_schema__ then you likely need to call handler.generate_schema(<some type>) since we do not call __get_pydantic_core_schema__ on <some type> otherwise to avoid infinite recursion.

For further information visit https://errors.pydantic.dev/2.8/u/schema-for-unknown-type
```
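For context on what the error message is asking for: pydantic v2 refuses to build a validation schema for types it doesn't know (here, starlette's `Request`), unless the model opts in via `arbitrary_types_allowed`. A minimal, self-contained sketch of that mechanism (the `Unknown` class is a hypothetical stand-in for `starlette.requests.Request`, not part of the repo):

```python
from pydantic import BaseModel, ConfigDict


class Unknown:
    """Hypothetical stand-in for a type pydantic can't generate a schema for."""
    pass


class Model(BaseModel):
    # Without this config line, defining the model raises
    # PydanticSchemaGenerationError, just like in the traceback above.
    model_config = ConfigDict(arbitrary_types_allowed=True)

    req: Unknown


m = Model(req=Unknown())
print(type(m.req).__name__)  # → Unknown
```

Pinning gradio to 4.43.0 works because that release carries the compatible pydantic handling, so no such opt-in is needed on the user's side.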
dieu commented 1 week ago

@mathigatti Thanks for the tip!

Were you able to run the 2B model on a free T4? I got this error:

```
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 34.94 GiB. GPU 0 has a total capacity of 14.75 GiB of which 5.13 GiB is free. Process 17472 has 9.61 GiB memory in use. Of the allocated memory 7.19 GiB is allocated by PyTorch, and 2.29 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
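As a starting point, the traceback itself suggests one mitigation: setting `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` to reduce fragmentation. A sketch of applying that in a Colab cell, with a half-precision load as a further memory saver (the `transformers` lines are commented out and are my assumption about the usual loading pattern, not taken from this notebook):

```python
import os

# Must be set before torch is imported for the allocator to pick it up.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# Illustrative only -- assumes `transformers` >= 4.45 is installed:
#
#   import torch
#   from transformers import Qwen2VLForConditionalGeneration
#   model = Qwen2VLForConditionalGeneration.from_pretrained(
#       "Qwen/Qwen2-VL-2B-Instruct",
#       torch_dtype=torch.float16,  # half precision roughly halves weight memory
#       device_map="auto",
#   )

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

That said, a single 34.94 GiB allocation points at something bigger than the weights (e.g. a very large input), so fragmentation settings alone may not be enough here.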