Open Linuxuser1234 opened 1 year ago
Hello! It looks like you didn't set the Vicuna weights. You can set them following the instructions in the README. Thanks!
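(For context: the weight path lives in the model config. A rough sketch of the relevant line, with the key name llama_model and the file location taken from the MiniGPT-4 repo's model config, so worth verifying against your own checkout; the value shown is the default placeholder you replace with the folder holding your prepared Vicuna weights:)

# minigpt4/configs/models/minigpt4.yaml
llama_model: "/path/to/vicuna/weights/"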
Sorry to bother you, I have encountered the same problem as you and have set up various files according to the process. May I ask if your problem has been resolved now
I did everything listed on the page I think? Maybe I didn't set up the weights...
I don't think I'm even as far as this.
L:\minigpt\MiniGPT-4>python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
Traceback (most recent call last):
  File "L:\minigpt\MiniGPT-4\demo.py", line 10, in <module>
    from minigpt4.common.config import Config
  File "L:\minigpt\MiniGPT-4\minigpt4\__init__.py", line 11, in <module>
    from omegaconf import OmegaConf
ModuleNotFoundError: No module named 'omegaconf'
Do I need to set the path differently or something...?
When I met problems like "No module named ...", I just used pip to install the module it needed.
Hmm,
pip install git+https://github.com/lm-sys/FastChat.git@v0.1.10 ?
I think I did that... is there another one I have to do?
Maybe I did it in a different directory. Nope, I ran it in the same directory I'm trying to launch from... just got a lot of "requirement already satisfied" messages...
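(Note: installing FastChat is only part of the weight setup; the delta weights still have to be merged into the original LLaMA weights to produce working Vicuna weights. A rough sketch of that step follows; the exact flag names may differ between FastChat versions, and all three paths are placeholders:)

python -m fastchat.model.apply_delta --base /path/to/llama-13b-hf --target /path/to/vicuna-13b-working --delta /path/to/vicuna-13b-delta-v0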
Oh so you mean pip install omegaconf?
Emm, I just execute pip install xxx whenever this kind of bug appears; it just adds the missing module. Don't be afraid of this: if anything then runs into an error, just uninstall it.
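(For the error above, the missing module is omegaconf, so in the same environment you launch the demo from:)

pip install omegaconf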
Ok, I did that... now I'm at:
python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
Initializing Chat
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ L:\minigpt\MiniGPT-4\demo.py:55 in
This is the problem I just met. Ctrl-click the address to open that file, and change the path to something like L:/minigpt/MiniGPT-4/minigpt4/configs/models/minigpt4.yaml. Anyway, this path is something you should double-check.
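(A quick way to double-check a path from the same prompt, reusing the path above, is to ask Python whether it exists:)

python -c "import os; print(os.path.exists(r'L:/minigpt/MiniGPT-4/minigpt4/configs/models/minigpt4.yaml'))"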
I was watching a video that said it won't work on Windows... is that true? Is that the issue we're facing? It seems like others have gotten it to work, though?
What do you mean I should double-check the path? Edit minigpt4.yaml with its own path?
I have it set to the model location...? L:\minigpt\MiniGPT-4\vicuna-13b-delta-v0
I think we can make it work on Windows. And about that path: nothing to change! My fault, it's just right as it is.
If you succeed, could you please help me solve a problem? See #97.
On line 16 you just need to enter vicuna-7b-delta-v1.1.
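(A sketch of what that edited line 16 might look like, assuming the minigpt4.yaml model config mentioned above; vicuna-7b-delta-v1.1 here stands for whatever local folder actually holds your ready-to-use, already-merged Vicuna weights:)

# minigpt4/configs/models/minigpt4.yaml, line 16
llama_model: "vicuna-7b-delta-v1.1"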
Trying to run the MiniGPT-4 demo.py, but when I run python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0 I get this error:
(minigpt4) C:\Users\---\MiniGPT-4>python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
Initializing Chat
Loading VIT
100%|█████████████████████████████████████████████████████████████████████████████| 1.89G/1.89G [02:10<00:00, 15.5MB/s]
Loading VIT Done
Loading Q-Former
100%|███████████████████████████████████████████████████████████████████████████████| 413M/413M [00:31<00:00, 13.8MB/s]
Loading Q-Former Done
Loading LLAMA
Traceback (most recent call last):
  File "C:\Users\----\MiniGPT-4\demo.py", line 60, in <module>
    model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id))
  File "C:\Users\---\MiniGPT-4\minigpt4\models\mini_gpt4.py", line 243, in from_config
    model = cls(
  File "C:\Users\---\MiniGPT-4\minigpt4\models\mini_gpt4.py", line 86, in __init__
    self.llama_tokenizer = LlamaTokenizer.from_pretrained(llama_model, use_fast=False)
  File "D:\Anaconda\envs\minigpt4\lib\site-packages\transformers\tokenization_utils_base.py", line 1770, in from_pretrained
    resolved_vocab_files[file_id] = cached_file(
  File "D:\Anaconda\envs\minigpt4\lib\site-packages\transformers\utils\hub.py", line 409, in cached_file
    resolved_file = hf_hub_download(
  File "D:\Anaconda\envs\minigpt4\lib\site-packages\huggingface_hub\utils\_validators.py", line 112, in _inner_fn
    validate_repo_id(arg_value)
  File "D:\Anaconda\envs\minigpt4\lib\site-packages\huggingface_hub\utils\_validators.py", line 160, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/path/to/vicuna/weights/'. Use `repo_type` argument if needed.
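(The '/path/to/vicuna/weights/' in that error is the untouched placeholder from the model config, so the tokenizer tries to treat it as a Hugging Face repo id instead of reading a local folder. After pointing the config at a real directory, a quick sanity check, with the folder name below being only a made-up example, is:)

python -c "from transformers import LlamaTokenizer; LlamaTokenizer.from_pretrained(r'C:\vicuna-weights', use_fast=False)"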