THUDM / VisualGLM-6B

Chinese and English multimodal conversational language model | 多模态中英双语对话语言模型
Apache License 2.0

PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x00000215425A1FD0> #156

Open nijisakai opened 1 year ago

nijisakai commented 1 year ago
(GLM) C:\Users\niji\VisualGLM-6B>python api.py
[2023-06-29 22:48:00,989] [INFO] DeepSpeed/CUDA is not installed, fallback to Pytorch checkpointing.
[2023-06-29 22:48:01,226] [WARNING] DeepSpeed Not Installed, you cannot import training_main from sat now.
[2023-06-29 22:48:01,922] [INFO] building VisualGLMModel model ...
[W C:\cb\pytorch_1000000000000\work\torch\csrc\distributed\c10d\socket.cpp:601] [c10d] The client socket has failed to connect to [kubernetes.docker.internal]:11952 (system error: 10049 - The requested address is not valid in its context.).
[W C:\cb\pytorch_1000000000000\work\torch\csrc\distributed\c10d\socket.cpp:601] [c10d] The client socket has failed to connect to [kubernetes.docker.internal]:11952 (system error: 10049 - The requested address is not valid in its context.).
[2023-06-29 22:48:01,992] [INFO] [RANK 0] > initializing model parallel with size 1
[2023-06-29 22:48:01,997] [INFO] [RANK 0] You are using model-only mode.
For torch.distributed users or loading model parallel models, set environment variables RANK, WORLD_SIZE and LOCAL_RANK.
C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\torch\nn\init.py:405: UserWarning: Initializing zero-element tensors is a no-op
  warnings.warn("Initializing zero-element tensors is a no-op")
[2023-06-29 22:48:10,818] [INFO] [RANK 0]  > number of parameters on model parallel rank 0: 7810582016
[2023-06-29 22:48:16,418] [INFO] [RANK 0] global rank 0 is loading checkpoint C:\Users\niji/.sat_models\visualglm-6b\1\mp_rank_00_model_states.pt
[2023-06-29 22:48:46,067] [INFO] [RANK 0] > successfully loaded C:\Users\niji/.sat_models\visualglm-6b\1\mp_rank_00_model_states.pt
INFO:     Started server process [24864]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
Start to process request
INFO:     127.0.0.1:12073 - "POST / HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 428, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in __call__
    return await self.app(scope, receive, send)
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\fastapi\applications.py", line 284, in __call__
    await super().__call__(scope, receive, send)
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\starlette\applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
    raise exc
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
    raise exc
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
    raise e
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
    await self.app(scope, receive, send)
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\starlette\routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\starlette\routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\starlette\routing.py", line 66, in app
    response = await func(request)
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\fastapi\routing.py", line 241, in app
    raw_response = await run_endpoint_function(
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\fastapi\routing.py", line 167, in run_endpoint_function
    return await dependant.call(**values)
  File "C:\Users\niji\VisualGLM-6B\api.py", line 32, in visual_glm
    input_data = generate_input(input_text, input_image_encoded, history, input_para)
  File "C:\Users\niji\VisualGLM-6B\model\infer_util.py", line 40, in generate_input
    image = Image.open(BytesIO(decoded_image))
  File "C:\Users\niji\anaconda3\envs\GLM\lib\site-packages\PIL\Image.py", line 3283, in open
    raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x00000215425A1FD0>
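The traceback ends inside `generate_input` (model/infer_util.py), where the POSTed image string is base64-decoded and passed straight to `Image.open`. A frequent cause of `UnidentifiedImageError` at that point is the client sending a data URI (`data:image/png;base64,...`) instead of raw base64, so the decoded bytes are not a valid image. A minimal defensive decoder is sketched below; the helper name and the magic-byte checks are illustrative additions, not part of the repository:

```python
import base64


def decode_image_bytes(encoded: str) -> bytes:
    # Web clients often submit images as data URIs. Decoding the whole
    # string (prefix included) yields bytes PIL cannot identify, which
    # raises UnidentifiedImageError. Strip the prefix first.
    if encoded.startswith("data:"):
        encoded = encoded.split(",", 1)[1]
    data = base64.b64decode(encoded)
    # Sanity-check common magic numbers before handing the bytes to PIL,
    # so a bad payload fails with a clear error instead of a 500.
    if not (
        data.startswith(b"\x89PNG")
        or data.startswith(b"\xff\xd8\xff")
        or data[:6] in (b"GIF87a", b"GIF89a")
    ):
        raise ValueError("decoded payload does not look like a PNG/JPEG/GIF image")
    return data
```

The validated bytes can then be passed to `Image.open(BytesIO(data))` as infer_util.py already does.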
issouker97 commented 4 months ago

Hey, how did you solve this problem? I have the same issue.

nijisakai commented 4 months ago

> Hey, how did you solve this problem? I have the same issue.

I think the fix is to downgrade the Pillow version.
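If downgrading Pillow does resolve it, the pin would look roughly like the following. The exact version bound is an assumption; the thread does not confirm which release works:

```shell
# Unverified workaround from the thread: pin Pillow to an older release,
# then restart api.py and retry the request.
pip install "pillow<10"
```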