gqsmmz opened 5 months ago
Which version of LLaMA do you use? Maybe it's because you used llama2 weights but merged with llama1 vicuna?
The llama_model I used is "llama-2-7b-chat-hf"; its contents are as follows. I didn't get it from the official website but through another channel, though it appears to be correct. The "llama_proj_model" and "ckpt" files are the ones you provided.
llama2 should be used directly, without adding vicuna
I directly used "llama-2-7b-chat-hf" to replace the llama_model path like this, without using apply_delta.py.
Or do you mean that the code inside here has added Vicuna?
llama2 can be used directly; just modify the path in the yaml file. There is no need to merge it with Vicuna.
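For reference, a minimal sketch of the relevant yaml fields (field names are the ones mentioned in this thread; all paths are placeholders for your local files, and the exact nesting may differ in the actual config):

```yaml
llama_model: "/path/to/llama-2-7b-chat-hf"    # point directly at the llama2 weights
llama_proj_model: "/path/to/llama_proj.pth"   # projection weights provided by the authors
ckpt: "/path/to/finetune-vicuna7b-v2.pth"     # finetuned checkpoint provided by the authors
```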
Got it! I am using it directly. I'd like to know: is the output with llama2 supposed to look like the output above?~
Normally it isn't :), have you tried other videos?
I have tried other videos. For a video of someone beating eggs with an egg beater, with the text-query changed to "describe the video", the output was "describe the video using different tools" (as I remember it).
The outputs are always a combination of the text-query and the video content, or just the text-query itself. I ask it to describe, but the returned answer is "describe the video ...". So it isn't really an answer.
If I change the text-query to "what is he doing", the output becomes the one above.
My question is: how do I get your results from your demo?~
The llama2 you are using may be problematic. With the demo video as input, neither llama1 nor llama2 produces the behavior you describe.
OK, I'll try other models first.
Could it be that, when using llama as the llama_model, the ckpt here shouldn't be finetune-vicuna7b-v2.pth?
OK, I've worked around it for now: I switched llama to the vicuna model so that it matches the ckpt.
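In other words, the base LLM and the checkpoint have to come from the same family. A sketch of the pairing (field names as used in this thread; paths are placeholders):

```yaml
llama_model: "/path/to/vicuna-7b"            # base LLM switched to vicuna...
ckpt: "/path/to/finetune-vicuna7b-v2.pth"    # ...so it matches the vicuna-finetuned checkpoint
```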
Thanks to the authors for the patient answers! Thanks!
OK, you're welcome (ง •_•)ง
May I ask, is your model unable to recognize Minecraft gameplay videos?
For videos of Minecraft, try to use https://github.com/rese1f/STEVE
Hello, could you provide the llama ckpt file?~
That way, llama_model could be switched to llama, or to a llama language model finetuned on Minecraft.
Right now only the vicuna ckpt file can be used~~~
Sorry, I can't directly provide the llama ckpt file. You can refer to the solution under this issue: https://github.com/rese1f/MovieChat/issues/36
Oh, sorry!!! I mean the ckpt in this picture, called "VL_LLaMA_2_7B_Fintuned.pth", not the official llama ckpt. Thanks for your time!~
You can access them via VideoLLaMA.
So kind of you, I've found it~ Thank you so much!!
Hi, what does "switched llama to vicuna" mean here? If using llama2, which model should ultimately be used?
I want to use the demo to get some answers for a video. Unfortunately, the output I got from your demo does not seem to answer the input text-query, as follows (the last line is the output).