OFA-Sys / Chinese-CLIP

Chinese version of CLIP which achieves Chinese cross-modal retrieval and representation generation.
MIT License

Can the ChineseCLIPVisionModel in Hugging Face be used? #323

Open ranck626 opened 5 months ago

ranck626 commented 5 months ago

```
(cropa) root@v3-custom-667bbc1c8454e110f8bda977-7ldhs:/# python /data/codes/test.py
Downloading config.json: 3.01kB [00:00, 215kB/s]
Downloading pytorch_model.bin: 100%|██████████| 753M/753M [00:58<00:00, 12.9MB/s]
Downloading (…)rocessor_config.json: 100%|██████████| 342/342 [00:00<00:00, 130kB/s]
Traceback (most recent call last):
  File "/data/codes/test.py", line 6, in <module>
    processor = CLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
  File "/data/miniconda3/envs/cropa/lib/python3.9/site-packages/transformers/processing_utils.py", line 226, in from_pretrained
    args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "/data/miniconda3/envs/cropa/lib/python3.9/site-packages/transformers/processing_utils.py", line 270, in _get_arguments_from_pretrained
    args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
  File "/data/miniconda3/envs/cropa/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1838, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'OFA-Sys/chinese-clip-vit-base-patch16'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'OFA-Sys/chinese-clip-vit-base-patch16' is the correct path to a directory containing all relevant files for a CLIPTokenizerFast tokenizer.
```

ranck626 commented 5 months ago

Alternatively, is there any way to download only Chinese-CLIP's image encoder? I want to fine-tune just the image encoder.
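For reference, the vision tower can be instantiated on its own with `ChineseCLIPVisionModel`. A minimal sketch, using a tiny randomly initialized config (hypothetical sizes, not the pretrained checkpoint) so it runs without downloading weights; in practice you would call `ChineseCLIPVisionModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")` instead:

```python
import torch
from transformers import ChineseCLIPVisionConfig, ChineseCLIPVisionModel

# Hypothetical tiny config for illustration only; the real checkpoint
# would be loaded with ChineseCLIPVisionModel.from_pretrained(...).
config = ChineseCLIPVisionConfig(
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    image_size=32,
    patch_size=16,
)
model = ChineseCLIPVisionModel(config)

pixel_values = torch.randn(1, 3, 32, 32)  # one fake RGB image
outputs = model(pixel_values=pixel_values)
print(tuple(outputs.pooler_output.shape))  # (1, hidden_size)
```

The returned object carries `last_hidden_state` (per-patch features plus CLS) and `pooler_output` (the pooled CLS state), so no text tower or tokenizer is involved at all.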

ranck626 commented 5 months ago

It seems to have been a tokenizer problem; switching to AutoProcessor fixed it.

ranck626 commented 5 months ago

```python
from PIL import Image
import requests
from transformers import ChineseCLIPVisionModel, AutoProcessor

model = ChineseCLIPVisionModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
processor = AutoProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")

outputs = model(**inputs)
last_hidden_state = outputs.last_hidden_state
pooled_output = outputs.pooler_output  # pooled CLS states
print(len(pooled_output))
```
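On fine-tuning only the image encoder: one common approach is to freeze most of the vision tower and train only the last transformer layer. A sketch under the assumption that `ChineseCLIPVisionModel` exposes its encoder layers at `model.vision_model.encoder.layers` (as the Hugging Face CLIP-style vision models do); a tiny random config is used here so the example runs offline, but the same freezing code applies after `from_pretrained`:

```python
from transformers import ChineseCLIPVisionConfig, ChineseCLIPVisionModel

# Tiny hypothetical config so the sketch runs offline; for real fine-tuning,
# replace with ChineseCLIPVisionModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16").
config = ChineseCLIPVisionConfig(
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    image_size=32,
    patch_size=16,
)
model = ChineseCLIPVisionModel(config)

# Freeze everything, then unfreeze only the final transformer layer.
for p in model.parameters():
    p.requires_grad = False
for p in model.vision_model.encoder.layers[-1].parameters():
    p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable}/{total}")
```

Only the unfrozen parameters receive gradients, so an optimizer built from `filter(lambda p: p.requires_grad, model.parameters())` updates just the last layer.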