dvlab-research / MGM

Official repo for "Mini-Gemini: Mining the Potential of Multi-modality Vision Language Models"
Apache License 2.0
3.22k stars 280 forks

OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory model_zoo/OpenAI/clip-vit-large-patch14-336. #36

Open hanshantong opened 7 months ago

hanshantong commented 7 months ago

Hello everyone, I am running MiniGemini evaluation on an image with the following command:

python -m minigemini.serve.cli  --model-path ./Mini-Gemini-2B/     --image-file replaced_with_path_to_image

and the following OSError was raised:

Traceback (most recent call last):
  File "home_path/anaconda3/envs/minigeimini/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "home_path/anaconda3/envs/minigeimini/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/data1/code_path/Projects/github/MiniGemini/minigemini/serve/cli.py", line 237, in <module>
    main(args)
  File "/data1/code_path/Projects/github/MiniGemini/minigemini/serve/cli.py", line 56, in main
    tokenizer, model, image_processor, context_len = load_pretrained_model(args.model_path, args.model_base, model_name, args.load_8bit, args.load_4bit, device=args.device)
  File "/data1/code_path/Projects/github/MiniGemini/minigemini/model/builder.py", line 112, in load_pretrained_model
    vision_tower.load_model()
  File "/data1/code_path/Projects/github/MiniGemini/minigemini/model/multimodal_encoder/clip_encoder.py", line 33, in load_model
    self.vision_tower = CLIPVisionModel.from_pretrained(self.vision_tower_name)
  File "home_path/anaconda3/envs/minigeimini/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3144, in from_pretrained
    raise EnvironmentError(
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory model_zoo/OpenAI/clip-vit-large-patch14-336.

Some users online shared a solution: download the corresponding "xxx.index.json" file. However, I can't find any "xxx.index.json" file for "clip-vit-large-patch14-336" on the Hugging Face website.

I thought the relative path might be causing the problem, so I replaced it with an absolute path, but the same error occurred.

My environment:
OS: Ubuntu 22.04 64-bit
Python: 3.10.14
Others: remaining libraries installed according to MiniGemini's official installation guide.

Does anybody have a solution for this? I would be very grateful, thank you.
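For context, the error is raised by `CLIPVisionModel.from_pretrained` when none of the weight files it knows how to load is present in the local directory (newer transformers versions also accept `model.safetensors`). A quick sanity check on the directory, as a hypothetical sketch (the path is the one from the traceback, adjust to yours):

```python
from pathlib import Path

# Weight files transformers' from_pretrained() can load from a local directory.
# model.safetensors is also accepted by newer transformers versions.
WEIGHT_FILES = [
    "pytorch_model.bin",
    "model.safetensors",
    "tf_model.h5",
    "model.ckpt.index",
    "flax_model.msgpack",
]

def find_weight_files(model_dir: str) -> list[str]:
    """Return which known weight files exist in model_dir."""
    root = Path(model_dir)
    return [name for name in WEIGHT_FILES if (root / name).exists()]

if __name__ == "__main__":
    found = find_weight_files("model_zoo/OpenAI/clip-vit-large-patch14-336")
    if not found:
        print("No weight file found -- the CLIP download is likely incomplete.")
    else:
        print("Found:", found)
```

If the list comes back empty, the download did not complete (Git LFS clones without `git lfs pull` leave only small pointer files behind, which produces exactly this error).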

yanwei-li commented 7 months ago

Hi, please download clip-vit-large-patch14-336 and OpenCLIP-ConvNeXt-L, and put them in MiniGemini/model_zoo/OpenAI/clip-vit-large-patch14-336 and MiniGemini/model_zoo/OpenAI/openclip-convnext-large-d-320-laion2B-s29B-b131K-ft-soup, respectively.
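One way to fetch both checkpoints into the expected locations is `huggingface-cli` (a sketch assuming `huggingface_hub` is installed; a Git LFS clone of the same repos works too):

```shell
# Run from the MiniGemini repo root.
mkdir -p model_zoo/OpenAI

# CLIP vision tower
huggingface-cli download openai/clip-vit-large-patch14-336 \
  --local-dir model_zoo/OpenAI/clip-vit-large-patch14-336

# OpenCLIP ConvNeXt-L auxiliary tower
huggingface-cli download laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup \
  --local-dir model_zoo/OpenAI/openclip-convnext-large-d-320-laion2B-s29B-b131K-ft-soup
```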

hanshantong commented 7 months ago

I downloaded these two models from the Hugging Face mirror site "https://hf-mirror.com/" and put them in "MiniGemini/model_zoo/OpenAI", but the same problem occurred.

yanwei-li commented 7 months ago

Hi, I'm not sure whether the files from "https://hf-mirror.com/" exactly match the official ones, but I did not see this error when using the official Hugging Face files.

DILIU1 commented 6 months ago

Hi, please download clip-vit-large-patch14-336 and OpenCLIP-ConvNeXt-L, and put them in MiniGemini/model_zoo/OpenAI/clip-vit-large-patch14-336 and MiniGemini/model_zoo/OpenAI/openclip-convnext-large-d-320-laion2B-s29B-b131K-ft-soup, respectively.

preprocessor_config.json and config.json cannot be found in OpenAI/openclip-convnext-large-d-320-laion2B-s29B-b131K-ft-soup.

zhanqan commented 1 month ago

Hi, please download clip-vit-large-patch14-336 and OpenCLIP-ConvNeXt-L, and put them in MiniGemini/model_zoo/OpenAI/clip-vit-large-patch14-336 and MiniGemini/model_zoo/OpenAI/openclip-convnext-large-d-320-laion2B-s29B-b131K-ft-soup, respectively.

preprocessor_config.json and config.json cannot be found in OpenAI/openclip-convnext-large-d-320-laion2B-s29B-b131K-ft-soup.

I also encountered the same problem. May I ask if you have solved it?
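Pulling the thread together, it may help to verify both model directories before launching the CLI. Below is a hypothetical pre-flight check; the file lists are assumptions based on the files reported missing in this thread (the OpenCLIP repo ships its weights as `open_clip_pytorch_model.bin` rather than in transformers format), so adjust them to whatever your MGM version actually loads:

```python
import os

# Files reported as needed in this thread; adjust to your setup.
EXPECTED = {
    "model_zoo/OpenAI/clip-vit-large-patch14-336": [
        "config.json",
        "preprocessor_config.json",
        "pytorch_model.bin",
    ],
    "model_zoo/OpenAI/openclip-convnext-large-d-320-laion2B-s29B-b131K-ft-soup": [
        "open_clip_pytorch_model.bin",
    ],
}

def missing_files(expected: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map each model directory to the expected files it is missing."""
    report = {}
    for directory, names in expected.items():
        absent = [n for n in names
                  if not os.path.isfile(os.path.join(directory, n))]
        if absent:
            report[directory] = absent
    return report

if __name__ == "__main__":
    report = missing_files(EXPECTED)
    if not report:
        print("All expected model files are present.")
    for directory, absent in report.items():
        print(f"{directory}: missing {absent}")
```

An empty report means the directories at least contain the files this thread's errors complained about; a non-empty one points at which download to redo.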