krishnaadithya opened this issue 1 year ago
Same here.
I managed to display the chat area by digging into the HTML (removing the "hidden" and "hide" classes from that area), but the chat is still unusable.
Hi, I am having difficulty reaching this step. Could you please share your code in a Colab notebook, with the steps you performed? That would be a great help. Thank you very much.
I have the same problem, and the error message is:

```
future: <Task finished name='jp57kt3ao7q_11' coro=<Queue.process_events() done, defined at /home/clh/miniconda3/envs/LLM/lib/python3.10/site-packages/gradio/queueing.py:343> exception=1 validation error for PredictBody
event_id
  Field required [type=missing, input_value={'fn_index': 11, 'data': ...on_hash': 'jp57kt3ao7q'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.3/v/missing>
...
pydantic_core._pydantic_core.ValidationError: 1 validation error for PredictBody
event_id
  Field required [type=missing, input_value={'fn_index': 11, 'data': ...on_hash': 'jp57kt3ao7q'}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.3/v/missing
```
It seems to be a pydantic problem; there is a related post: https://github.com/QwenLM/Qwen/issues/417.
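For anyone hitting this: the error pattern suggests a gradio release that predates pydantic 2's stricter request validation. A minimal, hedged sketch to check the installed versions (the compatibility claim is an assumption based on the linked issue, not verified against this repo):

```python
# Print the installed gradio and pydantic versions to spot the mismatch
# behind "1 validation error for PredictBody ... event_id".
from importlib.metadata import version

for pkg in ("gradio", "pydantic"):
    print(f"{pkg}: {version(pkg)}")

# If pydantic reports 2.x alongside an older gradio, the workaround
# reported in threads like QwenLM/Qwen#417 is to downgrade pydantic to
# 1.x or upgrade gradio so that PredictBody matches the request schema.
```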
@krishnaadithya @atultiwari @cesarandreslopez @caolonghao Hello, did you download the PMC dataset (llava_med_image_urls.jsonl) following the author's method? My download is very slow; could you reply?
@imdoublecats How did you import gradio_offline? Is its use the same as gradio? What adjustments need to be made? I would be very grateful if you could answer.

I installed it using pip, if my memory is right.
@imdoublecats Hi, did you download the PMC dataset (llava_med_image_urls.jsonl) following the author's method? My download is very slow; could you reply?
I only tested whether the outputs were correct; I didn't download the PMC data.
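For anyone stuck on the slow download, here is a hedged sketch of pulling the images listed in llava_med_image_urls.jsonl concurrently; the "image" and "url" field names are assumptions about the jsonl schema, so check the actual file before use:

```python
# Download images listed in a jsonl of URLs with a small thread pool,
# skipping files that already exist so the job can be resumed.
import json
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
from urllib.request import urlretrieve

out_dir = Path("images")
out_dir.mkdir(exist_ok=True)

def fetch(line: str) -> None:
    record = json.loads(line)
    target = out_dir / record["image"]      # assumed field: output filename
    if not target.exists():                 # resume-friendly
        urlretrieve(record["url"], target)  # assumed field: source URL

with open("llava_med_image_urls.jsonl") as f, ThreadPoolExecutor(8) as pool:
    list(pool.map(fetch, f))  # drain the iterator to surface any errors
```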
@imdoublecats Could I add you as a friend? My test keeps throwing errors and I haven't been able to solve it all week, haha.
No need, just leave your questions here; I'll reply when I'm not busy.
@yihp I got it running normally, just with plain gradio. Leave an email and I'll send you my exported environment.yml. For pytorch, installing the latest works; for transformers, following the install instructions or installing it directly both seem fine. For the GUI, change some of the visible flags in gradio_web_server.py to True and it works.
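For reference, a minimal sketch of the kind of change described above; the component names are illustrative, not the actual variables in LLaVA-Med's gradio_web_server.py:

```python
# Hypothetical sketch of making hidden demo components visible in a
# gradio Blocks app; in gradio_web_server.py the equivalent components
# are created with visible=False and toggled on later.
import gradio as gr

with gr.Blocks() as demo:
    chatbot = gr.Chatbot(visible=True)            # was visible=False
    textbox = gr.Textbox(visible=True)            # was visible=False
    submit_btn = gr.Button("Send", visible=True)  # was visible=False

demo.launch()
```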
@caolonghao Could you send me your environment.yml? My email is 2501512466@qq.com. Thanks a lot!
@caolonghao Hi, which llama weights are you using? And apart from the gradio_web_server changes, did you change anything else? I keep getting the error below, and switching transformers versions didn't help.
```
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:11<00:00, 5.58s/it]
/usr/local/lib/python3.10/site-packages/transformers/generation/utils.py:1411: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation )
  warnings.warn(
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/app/LLaVA-Med/llava/eval/run_llava.py", line 147
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
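As an aside, the UserWarning in this log is about moving generation settings out of the model config. A minimal sketch of what the transformers warning points to (the parameter values are placeholders, not LLaVA-Med's defaults):

```python
# Pass a GenerationConfig explicitly instead of mutating model.config,
# which is the deprecated strategy the warning complains about.
from transformers import GenerationConfig

gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,
    max_new_tokens=512,
)
# output_ids = model.generate(input_ids, generation_config=gen_config)
```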
@imdoublecats Hi, did you modify the code to use gradio_offline? The docs say gradio_offline is used the same way as gradio, but when I use it I get errors.
- I used gradio_offline because my network is poor and some URLs could not be reached, which made some web components fail to load. It is not required, and I did not use it any differently from gradio.
- Describe your error clearly; at the very least, paste the error message.
@imdoublecats
OK. My base llama weights are Llama-2-7b-chat-hf.
Environment: CUDA 11.7, v1004, torch 2.0.0+cu117, transformers, Python 3.10.0.
My problem is an error when I run the inference example llava.eval.run_llava, with this command:

```
python -m llava.eval.run_llava \
    --model-name /app/LLaVA-Med/checkpoints/LLaVA-Med-7B \
    --image-file "/app/LLaVA-Med/llava/serve/examples/bio_patch.png" \
    --query "What is this image about?"
```

The error message is:
```
root@976230b8c90c:/app/LLaVA-Med# bash scripts/chunyl/run_llava.sh
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. If you see this, DO NOT PANIC! This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:11<00:00, 5.58s/it]
/usr/local/lib/python3.10/site-packages/transformers/generation/utils.py:1411: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation )
  warnings.warn(
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/app/LLaVA-Med/llava/eval/run_llava.py", line 147
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```
@yihp The method I described is for launching the web demo, where gradio configures the web components; you are running the direct inference script, which is not the same setup as mine. The error is a tensor value check failing: some value does not satisfy the check, so you need to inspect your inputs and the tensor values at the corresponding line during the forward pass to find where the invalid values come from. It does not look like an environment problem.
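To make that concrete, here is a minimal sketch (not LLaVA-Med code) of inspecting the values behind this error; torch.multinomial raises it when the sampling probabilities contain inf, nan, or negative entries:

```python
# Diagnose "probability tensor contains either `inf`, `nan` or element < 0"
# by checking the logits right before sampling.
import torch

def inspect_logits(logits: torch.Tensor) -> None:
    print("any nan:", torch.isnan(logits).any().item())
    print("any inf:", torch.isinf(logits).any().item())
    probs = torch.softmax(logits.float(), dim=-1)
    print("any p < 0:", (probs < 0).any().item())

bad_logits = torch.tensor([1.0, float("nan"), 2.0])
inspect_logits(bad_logits)
# torch.multinomial(torch.softmax(bad_logits, -1), 1)  # raises the error above
```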
@imdoublecats I tested the web demo and found that llava.serve.test_message runs normally, but the web examples fail:
1. gradio_web_server

```
2024-01-20 14:05:45 | INFO | gradio_web_server | ==== request ====
{'model': 'LLaVA-Med-7B', 'prompt': 'You are LLaVA-Med, a large language and vision assistant trained by a group of researchers at Microsoft, based on the general domain LLaVA architecture.You are able to understand the visual content that the user provides, and assist the user with a variety of medical and clinical tasks using natural language.Follow the instructions carefully and explain your answers in detail.###Human: Hi!###Assistant: Hi there! How can I help you today?\n###Human: What is unusual about this image?
RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
```

Tracing it back to model_worker, it is still the same error as before.
@imdoublecats Then I set Temperature to 1 and lowered Max output tokens from the default 512 to 256, and got the hallucinated answer shown below.
@yihp It looks like the same type of error. I tried both the llava and llava-med weights and both produced normal output; I don't remember ever hitting this error, so you will probably need to track down its specific cause yourself. I can't guess the cause from this message alone. The garbled output may be a problem with the model or tokenizer weights. Have you tried other models, for example the llava-med model provided by the authors? Do they behave the same way?
@imdoublecats I just tested the LLaVA-Med VQA-RAD-finetuned model; it does answer now, but the responses are still wrong. Which model are you using? Are all four of these final llava-med models?
@yihp I used the first one; garbled output is definitely wrong. I tried the VQA one and its answers seemed off, though I forget the details; the first one, and the LoRA it was pretrained with, work normally. These are delta weights, so they must first be merged with the original llama model before use. If your base llama model was downloaded from huggingface rather than obtained through the official llama application process, it may not be the original llama model, and that could also cause these problems.
@imdoublecats Got it, thank you! I'll download the first one and test it. With the weights from there, do I still need to merge with the llama weights?

```
python3 -m llava.model.apply_delta \
    --base /path/to/llama-7b \
    --target /output/path/to/llava_med_model \
    --delta /path/to/llava_med_delta_weights
```
@yihp The --base model must be a llama model obtained through official channels, or you must make sure the copy you downloaded is completely identical to the official llama model. Because of the llama license, developers generally do not distribute merged model weights.
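One way to check "completely identical" in practice is to compare checksums of the weight files against those of an official copy. A hedged sketch; the path and the *.bin glob are assumptions about the checkpoint layout:

```python
# Hash each weight shard so two copies of llama-7b can be compared
# file by file against a trusted reference.
import hashlib
from pathlib import Path

def sha256sum(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

for shard in sorted(Path("/path/to/llama-7b").glob("*.bin")):
    print(shard.name, sha256sum(shard))
```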
@imdoublecats Oh, I see. My application on the official site wasn't approved, so I downloaded llama2 from a mirror site; the problem probably comes from there.
@imdoublecats Hi, thank you very much for your earlier answers. Could I add you as a friend? I'd like to ask a few small questions. My QQ is 2501512466, or WeChat: y15215200276. Sorry to bother you!
> @yihp I got it running normally, just with plain gradio. Leave an email and I'll send you my exported environment.yml. For pytorch, installing the latest works; for transformers, following the install instructions or installing it directly both seem fine. For the GUI, change some of the visible flags in gradio_web_server.py to True.

Hello, could you please send me a copy of the dependencies as well? 274429079@qq.com, thank you!
After running all the necessary commands, the demo doesn't display the chat interface as expected. It appears to be stuck, and the chat UI is not visible despite the code containing the relevant implementation.
Not sure what I am doing wrong.