Closed: WangRongsheng closed this issue 1 year ago.
wow, you are the best!
It is good!
Running !python demo.py --cfg-path eval_configs/minigpt4_eval.yaml
gives an error:
Initializing Chat
Downloading (…)solve/main/vocab.txt: 100% 232k/232k [00:00<00:00, 8.88MB/s]
Downloading (…)okenizer_config.json: 100% 28.0/28.0 [00:00<00:00, 4.18kB/s]
Downloading (…)lve/main/config.json: 100% 570/570 [00:00<00:00, 225kB/s]
Loading VIT
100% 1.89G/1.89G [00:11<00:00, 182MB/s]
Loading VIT Done
Loading Q-Former
100% 413M/413M [00:02<00:00, 187MB/s]
Loading Q-Former Done
Loading LLAMA
╭─────────────────── Traceback (most recent call last) ───────────────────╮
│ /content/MiniGPT-4/demo.py:60 in <module>
│
│    57 model_config = cfg.model_cfg
│    58 model_config.device_8bit = args.gpu_id
│    59 model_cls = registry.get_model_class(model_config.arch)
│ ❱  60 model = model_cls.from_config(model_config).to('cuda:{}'.format(args.g
│    61
│    62 vis_processor_cfg = cfg.datasets_cfg.cc_sbu_align.vis_processor.train
│    63 vis_processor = registry.get_processor_class(vis_processor_cfg.name).f
│
│ /content/MiniGPT-4/minigpt4/models/mini_gpt4.py:243 in from_config
│
│   240 │   │   max_txt_len = cfg.get("max_txt_len", 32)
│   241 │   │   end_sym = cfg.get("end_sym", '\n')
│   242 │   │
│ ❱ 243 │   │   model = cls(
│   244 │   │   │   vit_model=vit_model,
│   245 │   │   │   q_former_model=q_former_model,
│   246 │   │   │   img_size=img_size,
│
│ /content/MiniGPT-4/minigpt4/models/mini_gpt4.py:86 in __init__
│
│    83 │   │   print('Loading Q-Former Done')
│    84 │   │
│    85 │   │   print('Loading LLAMA')
│ ❱  86 │   │   self.llama_tokenizer = LlamaTokenizer.from_pretrained(llama_mo
│    87 │   │   self.llama_tokenizer.pad_token = self.llama_tokenizer.eos_toke
│    88 │   │
│    89 │   │   if self.low_resource:
│
│ /usr/local/lib/python3.9/dist-packages/transformers/tokenization_utils_base.py:1771 in from_pretrained
│
│   1768 │   │   │   │   elif is_remote_url(file_path):
│   1769 │   │   │   │   │   resolved_vocab_files[file_id] = download_url(file
│   1770 │   │   │   else:
│ ❱ 1771 │   │   │   │   resolved_vocab_files[file_id] = cached_file(
│   1772 │   │   │   │   │   pretrained_model_name_or_path,
│   1773 │   │   │   │   │   file_path,
│   1774 │   │   │   │   │   cache_dir=cache_dir,
│
│ /usr/local/lib/python3.9/dist-packages/transformers/utils/hub.py:409 in cached_file
│
│   406 │   user_agent = http_user_agent(user_agent)
│   407 │   try:
│   408 │   │   # Load from URL or cache if already cached
│ ❱ 409 │   │   resolved_file = hf_hub_download(
│   410 │   │   │   path_or_repo_id,
│   411 │   │   │   filename,
│   412 │   │   │   subfolder=None if len(subfolder) == 0 else subfolder,
│
│ /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py:112 in _inner_fn
│
│   109 │   │   │   kwargs.items(),  # Kwargs values
│   110 │   │   ):
│   111 │   │   │   if arg_name in ["repo_id", "from_id", "to_id"]:
│ ❱ 112 │   │   │   │   validate_repo_id(arg_value)
│   113 │   │   │
│   114 │   │   │   elif arg_name == "token" and arg_value is not None:
│   115 │   │   │   │   has_token = True
│
│ /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py:160 in validate_repo_id
│
│   157 │   │   raise HFValidationError(f"Repo id must be a string, not {type(
│   158 │
│   159 │   if repo_id.count("/") > 1:
│ ❱ 160 │   │   raise HFValidationError(
│   161 │   │   │   "Repo id must be in the form 'repo_name' or 'namespace/rep
│   162 │   │   │   f" '{repo_id}'. Use `repo_type` argument if needed."
│   163 │   │   )
╰──────────────────────────────────────────────────────────────────────────╯
HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/path/to/vicuna/weights/'. Use `repo_type` argument if needed.
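For context: this happens when minigpt4/configs/models/minigpt4.yaml still contains the placeholder llama_model: "/path/to/vicuna/weights/". Since that path does not exist on Colab, transformers treats it as a Hugging Face repo id, and huggingface_hub rejects it. A minimal reproduction, using the validate_repo_id function from the traceback above:

from huggingface_hub.utils import HFValidationError, validate_repo_id

try:
    # The unedited placeholder from minigpt4.yaml: it contains more than one
    # "/", so it cannot be a 'repo_name' or 'namespace/repo_name' id.
    validate_repo_id("/path/to/vicuna/weights/")
except HFValidationError as e:
    print(e)

validate_repo_id("wangrongsheng/MiniGPT-4-LLaMA")  # a valid namespace/repo_name id passes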
@XuNing2 Set llama_model: "wangrongsheng/MiniGPT-4-LLaMA" in minigpt4/configs/models/minigpt4.yaml
Loading checkpoint shards: 0% 0/3 [00:00<?, ?it/s]
It gets stuck here and then the process just ends...
@sanjikk To use MiniGPT-4 in Google Colab you must use a GPU runtime and be a Google Colab Pro user; otherwise it will not run!
@WangRongsheng In fact, I am a Pro user and I do use a GPU. I finally found that I should choose the high-tier GPU option. Thanks
I am a Pro user and I have used the A100, but I get an "UnpicklingError: invalid load key, '<'."
Prompt Example
Load BLIP2-LLM Checkpoint: pretrained_minigpt4.pth
╭─────────────────── Traceback (most recent call last) ───────────────────╮
│ /content/MiniGPT-4/demo.py:60 in <module>
@ChristianAchenbach4815 Please check:
@TsuTikgiau Hi, I've updated the Google Colab notebook with MiniGPT-4 7B; you can enjoy it!
Hi, after setting llama_model: "wangrongsheng/MiniGPT-4-LLaMA" in minigpt4/configs/models/minigpt4.yaml, which LLaMA model will it load, 13B or 7B?
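(Judging from the rest of this thread: wangrongsheng/MiniGPT-4-LLaMA is the 13B merged model, paired with the 13B checkpoint pretrained_minigpt4.pth; the 7B variant is wangrongsheng/MiniGPT-4-LLaMA-7B, paired with prerained_minigpt4_7b.pth.)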
@created-Bi This will help you: https://colab.research.google.com/drive/1OK4kYsZphwt5DXchKkzMBjYF6jnkqh4R?usp=sharing
@ChristianAchenbach4815 - The 13B model download URL is incorrect. The right URL is !wget https://huggingface.co/wangrongsheng/MiniGPT4/resolve/main/pretrained_minigpt4.pth
(note "resolve/main" instead of "blob/main")
The "blob/main" URL is an HTML page, hence the error
!python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
...
UnpicklingError: invalid load key, '<'
After this tiny change, I see no issue on Colab (running on an A100). Thanks @WangRongsheng 🔥
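For anyone else hitting the UnpicklingError: a quick, illustrative way to check what you actually downloaded (not code from this repo) is to look at the first bytes of the file. An HTML page saved from a blob/main link starts with "<", while a real torch checkpoint is binary:

# Illustrative sanity check for a downloaded checkpoint file:
with open("pretrained_minigpt4.pth", "rb") as f:
    head = f.read(16)
if head.lstrip().startswith(b"<"):
    print("This is an HTML page, not a checkpoint; re-download from a resolve/main URL.")
else:
    print("Header looks binary; torch.load should not fail with load key '<'.")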
Hello, when I run with wangrongsheng/MiniGPT-4-LLaMA-7B, an error occurs: the shapes of the weight and bias in the llama_proj module mismatch those of the original MiniGPT-4 (4096 vs 5120). So I'm wondering, did you change the shape of the weight and bias in the llama_proj module?
@created-Bi Please give me more error information. I can't repeat this error.
omg, this colab is GARBAGE, sorry, but it's so hard to use; don't commit half-finished products.
I know it's harsh, but why on earth, to use this colab, do we need to:
@ArtemBernatskyy Here are some points to clarify:
@created-Bi Please give me more error information. I can't repeat this error.
Hi, I got the same error; here is the error information.
/usr/local/lib/python3.10/dist-packages/requests/__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.15) or chardet (5.1.0)/charset_normalizer (2.0.12) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
2023-05-01 08:46:15.855567: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Initializing Chat
Loading VIT
Loading VIT Done
Loading Q-Former
Loading Q-Former Done
Loading LLAMA
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/lib/python3.10/dist-packages/cv2/../../lib64')}
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:105: UserWarning: /usr/local/lib/python3.10/dist-packages/cv2/../../lib64:/usr/lib64-nvidia did not contain libcudart.so as expected! Searching further paths...
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//172.28.0.1'), PosixPath('http'), PosixPath('8013')}
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-v100-hm-2nxtjzw2zpl6c --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true'), PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https')}
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('module'), PosixPath('//ipykernel.pylab.backend_inline')}
  warn(
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.0
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118_nocublaslt.so...
Loading checkpoint shards: 100% 3/3 [02:17<00:00, 45.86s/it]
Downloading (…)neration_config.json: 100% 137/137 [00:00<00:00, 96.5kB/s]
Loading LLAMA Done
Load 4 training prompts
Prompt Example
Load BLIP2-LLM Checkpoint: /content/MiniGPT-4/prerained_minigpt4_7b.pth
╭─────────────────── Traceback (most recent call last) ───────────────────╮
│ /content/MiniGPT-4/demo.py:60 in <module>
@WangRongsheng
Have you tried running the stage-2 fine-tuning on Colab, with a command such as !torchrun --nproc-per-node 1 train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml ?
If I turn on low_resource: True in minigpt4_stage2_finetune.yaml, the following GPU/CPU issue happens:
/usr/local/lib/python3.10/dist-packages/requests/__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.15) or chardet (5.1.0)/charset_normalizer (2.0.12) doesn't match a supported version!
  warnings.warn("urllib3 ({}) or chardet ({})/charset_normalizer ({}) doesn't match a supported "
2023-05-01 12:30:28.549952: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
| distributed init (rank 0, world 1): env://
2023-05-01 12:30:30,840 [INFO] ===== Running Parameters =====
2023-05-01 12:30:30,841 [INFO] { "amp": true, "batch_size_eval": 6, "batch_size_train": 6, "device": "cuda", "dist_backend": "nccl", "dist_url": "env://", "distributed": true, "evaluate": false, "gpu": 0, "init_lr": 3e-05, "iters_per_epoch": 200, "lr_sched": "linear_warmup_cosine_lr", "max_epoch": 5, "min_lr": 1e-05, "num_workers": 2, "output_dir": "output/minigpt4_stage2_finetune", "rank": 0, "resume_ckpt_path": null, "seed": 42, "task": "image_text_pretrain", "train_splits": ["train"], "warmup_lr": 1e-06, "warmup_steps": 20, "weight_decay": 0.05, "world_size": 1 }
2023-05-01 12:30:30,841 [INFO] ====== Dataset Attributes ======
2023-05-01 12:30:30,841 [INFO] ======== cc_sbu_align =======
2023-05-01 12:30:30,841 [INFO] { "build_info": { "storage": "/content/cc_sbu_align/cc_sbu_align/" }, "data_type": "images", "text_processor": { "train": { "name": "blip_caption" } }, "vis_processor": { "train": { "image_size": 224, "name": "blip2_image_train" } } }
2023-05-01 12:30:30,841 [INFO] ====== Model Attributes ======
2023-05-01 12:30:30,842 [INFO] { "arch": "mini_gpt4", "ckpt": "/content/MiniGPT-4/prerained_minigpt4_7b.pth", "drop_path_rate": 0, "end_sym": "###", "freeze_qformer": true, "freeze_vit": true, "image_size": 224, "llama_model": "wangrongsheng/MiniGPT-4-LLaMA-7B", "low_resource": true, "max_txt_len": 160, "model_type": "pretrain_vicuna", "num_query_token": 32, "prompt": "", "prompt_path": "prompts/alignment.txt", "prompt_template": "###Human: {} ###Assistant: ", "use_grad_checkpoint": false, "vit_precision": "fp16" }
2023-05-01 12:30:30,842 [INFO] Building datasets...
Loading VIT
2023-05-01 12:30:57,246 [INFO] freeze vision encoder
Loading VIT Done
Loading Q-Former
2023-05-01 12:31:02,726 [INFO] load checkpoint from https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/BLIP2/blip2_pretrained_flant5xxl.pth
2023-05-01 12:31:02,733 [INFO] freeze Qformer
Loading Q-Former Done
Loading LLAMA
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/lib/python3.10/dist-packages/cv2/../../lib64')}
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:105: UserWarning: /usr/local/lib/python3.10/dist-packages/cv2/../../lib64:/usr/lib64-nvidia did not contain libcudart.so as expected! Searching further paths...
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('8013'), PosixPath('//172.28.0.1'), PosixPath('http')}
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https'), PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-v100-hm-2nxtjzw2zpl6c --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true')}
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//ipykernel.pylab.backend_inline'), PosixPath('module')}
  warn(
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/tmp/torchelastic__pzc3ueu/nonex0s6uvw/attempt_0/0/error.json')}
  warn(
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.0
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118_nocublaslt.so...
Loading checkpoint shards: 100% 2/2 [01:11<00:00, 35.64s/it]
Loading LLAMA Done
Load 4 training prompts
Prompt Example
Failures:
Why does # Vicuna llama_model: "wangrongsheng/MiniGPT-4-LLaMA" need to connect to the Internet when I run locally?
Model weights are downloaded automatically, so you must be online.
Thank you for the awesome work! One question, how can I change the download location for transformer models?
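One way to cover both questions above (a sketch using standard Hugging Face mechanisms, not anything MiniGPT-4-specific; the paths are illustrative): pre-download the weights to a directory you control, then point llama_model at that local directory so later runs work offline.

import os
from huggingface_hub import snapshot_download

# Redirect the transformers/huggingface_hub download cache (illustrative path).
os.environ["TRANSFORMERS_CACHE"] = "/content/hf_cache"

# Download the merged weights once; then set llama_model to the returned
# local directory in minigpt4/configs/models/minigpt4.yaml for offline runs.
local_dir = snapshot_download("wangrongsheng/MiniGPT-4-LLaMA", cache_dir="/content/hf_cache")
print(local_dir)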
Thanks for the great share; I've got it running. Also, with this, is it possible to deploy a standalone Vicuna as well?
can you share the code on how to use miniGPT4 on colab without gradio interface? Thank you!
There is a demo you can try: https://colab.research.google.com/drive/1VUzWoaGQoEx6OxgcRD742EbMpNlhAPHM?usp=sharing
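If you prefer plain Python over the notebook, the core of demo.py without the Gradio UI looks roughly like this (the model/processor setup follows demo.py as shown in the tracebacks above; the Chat/CONV_VISION calls follow minigpt4.conversation.conversation, and the exact generation arguments are illustrative):

import argparse
from PIL import Image
from minigpt4.common.config import Config
from minigpt4.common.registry import registry
from minigpt4.conversation.conversation import Chat, CONV_VISION

# Build the model the same way demo.py does.
args = argparse.Namespace(cfg_path="eval_configs/minigpt4_eval.yaml", gpu_id=0, options=None)
cfg = Config(args)
model_config = cfg.model_cfg
model_config.device_8bit = args.gpu_id
model_cls = registry.get_model_class(model_config.arch)
model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id))
vis_processor_cfg = cfg.datasets_cfg.cc_sbu_align.vis_processor.train
vis_processor = registry.get_processor_class(vis_processor_cfg.name).from_config(vis_processor_cfg)

# Chat without Gradio: upload an image, ask, and generate an answer.
chat = Chat(model, vis_processor, device='cuda:{}'.format(args.gpu_id))
chat_state = CONV_VISION.copy()
img_list = []
chat.upload_img(Image.open("example.jpg").convert("RGB"), chat_state, img_list)
chat.ask("Describe this image in detail.", chat_state)
answer = chat.answer(conv=chat_state, img_list=img_list, max_new_tokens=300)[0]
print(answer)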
Dear @youyuanrsq, Thank you!
After setting the llama_model and ckpt parameters it works!
@WangRongsheng When pulling the weights with git lfs pull, this error is reported.
can you share the code on how to use miniGPT4-V2 on colab without gradio interface? Thank you!
Use MiniGPT-4 in Colab
If you want to use MiniGPT-4 in Google Colab, you must use a GPU runtime and be a Google Colab Pro user; otherwise it will not run!
Use MiniGPT-4 on your computer
clone the repo:
install the packages: requirements.txt is stored in WangRongsheng/Use-LLMs-in-Colab
set the config:
run MiniGPT-4:
Have fun! (The command cells under each step were lost in this copy; see the sketch below.)
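A rough reconstruction of those cells, pieced together from the commands quoted earlier in this thread (the repo URL and exact file names are assumptions, not verified against the original notebook):

!git clone https://github.com/Vision-CAIR/MiniGPT-4.git
%cd MiniGPT-4
!pip install -r requirements.txt
# set llama_model: "wangrongsheng/MiniGPT-4-LLaMA" in minigpt4/configs/models/minigpt4.yaml
!wget https://huggingface.co/wangrongsheng/MiniGPT4/resolve/main/pretrained_minigpt4.pth
# set ckpt to the downloaded pretrained_minigpt4.pth in eval_configs/minigpt4_eval.yaml
!python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0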