Closed dimwael closed 1 year ago
I experienced the same issue! Were you able to fix it?
I fixed it by deleting the pretrained_minigpt4.pth checkpoint and downloading it again directly from Google Drive. For that you may use this approach:
pip install gdown
Then download the file with gdown, passing the checkpoint's Drive ID:
gdown --id 1a4zLvaiDBr-36pasffmgpvH5P7CKmpze
I hope that helps!
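Before re-running the demo, it can help to sanity-check the fresh download — a sketch, assuming the default filename `pretrained_minigpt4.pth` in the current directory (adjust the path if yours differs). A real checkpoint is a binary file, while a failed Drive download is usually an HTML error page whose first byte is `<`:

```python
from pathlib import Path

def looks_like_html(path):
    """Return True if the file starts with '<', i.e. is probably an HTML page."""
    with open(path, "rb") as f:
        return f.read(1) == b"<"

ckpt = Path("pretrained_minigpt4.pth")  # hypothetical default location
if not ckpt.exists() or looks_like_html(ckpt):
    print("Bad or missing download: delete the file and rerun gdown")
else:
    print(f"{ckpt} starts with binary data ({ckpt.stat().st_size} bytes)")
```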
It worked! Thank you very much ....
I am running the 13B version on a SageMaker instance.
!python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0
Initializing Chat
Loading VIT
Loading VIT Done
Loading Q-Former
Loading Q-Former Done
Loading LLAMA

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues

Loading checkpoint shards: 100%|██████████████████| 3/3 [02:46<00:00, 55.49s/it]
Loading LLAMA Done
Load 4 training prompts
Prompt Example
###Human: Please provide a detailed description of the picture. ###Assistant:
Load BLIP2-LLM Checkpoint: pretrained_minigpt4.pth
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /home/ec2-user/SageMaker/MiniGPT-4/demo.py:60 in <module>                    │
│ │
│ 57 model_config = cfg.model_cfg │
│ 58 model_config.device_8bit = args.gpu_id │
│ 59 model_cls = registry.get_model_class(model_config.arch) │
│ ❱ 60 model = model_cls.from_config(model_config).to('cuda:{}'.format(args.g │
│ 61 │
│ 62 vis_processor_cfg = cfg.datasets_cfg.cc_sbu_align.vis_processor.train │
│ 63 vis_processor = registry.get_processor_class(vis_processor_cfg.name).f │
│ │
│ /home/ec2-user/SageMaker/MiniGPT-4/minigpt4/models/mini_gpt4.py:265 in │
│ from_config │
│ │
│ 262 │ │ ckpt_path = cfg.get("ckpt", "") # load weights of MiniGPT-4 │
│ 263 │ │ if ckpt_path: │
│ 264 │ │ │ print("Load BLIP2-LLM Checkpoint: {}".format(ckpt_path)) │
│ ❱ 265 │ │ │ ckpt = torch.load(ckpt_path, map_location="cpu") │
│ 266 │ │ │ msg = model.load_state_dict(ckpt['model'], strict=False) │
│ 267 │ │ │
│ 268 │ │ return model │
│ │
│ /home/ec2-user/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/ │
│ serialization.py:795 in load │
│ │
│ 792 │ │ │ │ return _legacy_load(opened_file, map_location, _weigh │
│ 793 │ │ │ except RuntimeError as e: │
│ 794 │ │ │ │ raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) │
│ ❱ 795 │ │ return _legacy_load(opened_file, map_location, pickle_module, │
│ 796 │
│ 797 │
│ 798 # Register pickling support for layout instances such as │
│ │
│ /home/ec2-user/anaconda3/envs/pytorch_p39/lib/python3.9/site-packages/torch/ │
│ serialization.py:1002 in _legacy_load │
│ │
│ 999 │ │ │ f"Received object of type \"{type(f)}\". Please update to │
│ 1000 │ │ │ "functionality.") │
│ 1001 │ │
│ ❱ 1002 │ magic_number = pickle_module.load(f, pickle_load_args) │
│ 1003 │ if magic_number != MAGIC_NUMBER: │
│ 1004 │ │ raise RuntimeError("Invalid magic number; corrupt file?") │
│ 1005 │ protocol_version = pickle_module.load(f, pickle_load_args) │
╰──────────────────────────────────────────────────────────────────────────────╯
UnpicklingError: invalid load key, '<'.
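For anyone hitting this later: `invalid load key, '<'` means the unpickler's very first byte was `<`, i.e. the file on disk is actually an HTML page (typically Google Drive's quota or permission page) saved under the `.pth` name — which is why deleting and re-downloading the checkpoint fixes it. A minimal sketch reproducing the symptom with plain `pickle` (hypothetical filename):

```python
import pickle

# Simulate a failed Google Drive download: an HTML error page
# written where the checkpoint should be.
with open("fake_checkpoint.pth", "wb") as f:
    f.write(b"<!DOCTYPE html><html>Quota exceeded</html>")

# torch.load's legacy path begins by unpickling a magic number,
# so an HTML file fails just like in the traceback above.
try:
    with open("fake_checkpoint.pth", "rb") as f:
        pickle.load(f)
except pickle.UnpicklingError as e:
    print(e)  # invalid load key, '<'.
```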