[Closed] BigJoon closed this issue 8 months ago.
Sorry, we haven't tested the local demo yet. This error occurs when llavaPhi is loaded in fp32 but the image tensor is encoded in fp16. You can modify the code to ensure both use the same dtype.
Hope this helps you!
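(Editor's note: a minimal sketch of the suggested dtype fix. The names `model` and `image_tensor` are hypothetical stand-ins for the loaded llavaPhi model and the preprocessed image batch; the point is simply to cast the image tensor to whatever dtype the model's weights use before the forward pass.)

```python
import torch

# Stand-ins: a tiny fp32 module in place of the fp32-loaded llavaPhi model,
# and an fp16 tensor in place of the encoded image batch.
model = torch.nn.Linear(4, 4)                          # fp32 by default
image_tensor = torch.randn(1, 4, dtype=torch.float16)  # encoded in fp16

# The fix: cast the image tensor to the model's dtype before calling it.
image_tensor = image_tensor.to(dtype=next(model.parameters()).dtype)
out = model(image_tensor)  # no fp16/fp32 mismatch now
```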
@JLM-Z oh!! You are right!
That error disappeared, but another one just popped up. When I upload an image and say "hi", I get this:
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/gradio/routes.py", line 437, in run_predict
output = await app.get_blocks().process_api(
File "/opt/conda/lib/python3.10/site-packages/gradio/blocks.py", line 1352, in process_api
result = await self.call_function(
File "/opt/conda/lib/python3.10/site-packages/gradio/blocks.py", line 1093, in call_function
prediction = await utils.async_iteration(iterator)
File "/opt/conda/lib/python3.10/site-packages/gradio/utils.py", line 341, in async_iteration
return await iterator.__anext__()
File "/opt/conda/lib/python3.10/site-packages/gradio/utils.py", line 334, in __anext__
return await anyio.to_thread.run_sync(
File "/opt/conda/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/opt/conda/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
return await future
File "/opt/conda/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
File "/opt/conda/lib/python3.10/site-packages/gradio/utils.py", line 317, in run_sync_iterator_async
return next(iterator)
File "/workspace/workspace/llava-phi/llava_phi/serve/app.py", line 209, in http_bot
for chunk in output:
File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
response = gen.send(None)
File "/workspace/workspace/llava-phi/llava_phi/serve/app.py", line 160, in get_response
for new_text in streamer:
File "/opt/conda/lib/python3.10/site-packages/transformers/generation/streamers.py", line 223, in __next__
value = self.text_queue.get(timeout=self.timeout)
File "/opt/conda/lib/python3.10/queue.py", line 179, in get
raise Empty
_queue.Empty
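(Editor's note: the `_queue.Empty` at the bottom is raised by Python's standard `queue.Queue.get(timeout=...)`. As the traceback shows, `TextIteratorStreamer.__next__` blocks on an internal queue (`self.text_queue.get(timeout=self.timeout)`), so this error usually means the background generation thread produced no text before the timeout elapsed, often because it crashed silently. A minimal stdlib reproduction of that failure mode, with a hypothetical 0.1 s timeout:)

```python
import queue
import threading

q = queue.Queue()

def worker():
    # Simulate a generation thread that finishes without putting any text
    # on the queue (e.g. model.generate() failed before streaming anything).
    return

t = threading.Thread(target=worker)
t.start()
t.join()

# This mirrors what TextIteratorStreamer.__next__ does internally:
# block on the text queue with a timeout.
try:
    q.get(timeout=0.1)
    timed_out = False
except queue.Empty:
    timed_out = True  # nothing arrived -> queue.Empty, as in the traceback
```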
Sorry, since there is no time to test the local demo at the moment, you can use the following script to chat about images without needing the Gradio interface:
python -m llava_phi.serve.cli \
--model-path /path/to/checkpoints/llava \
--image-file ./images/03-Confusing-Pictures.jpg \
--conv-mode "phi-2_v0"
Thank you. Thanks to you, I think all the problems have been resolved. I'll close the issue!
I put the sample into app.py (written with Gradio) from your repo and ran it, but an error occurred. Do you recognize this error?
The model I used this time was llavaPhi, which you uploaded to Hugging Face...