alachyan-ml opened this issue 1 year ago
Hi @alachyan-ml, thank you for the report!
I ran

```
optimum-cli export onnx -m openai/clip-vit-base-patch32 --framework pt clip_onnx
```

with

```
torch==2.1.0
optimum==1.13.2
transformers==4.34.0
```

and cannot reproduce this issue. Could you give more details about the issue you are facing?
@fxmarty the model in your example is OpenAI's CLIP, not an open_clip one. I ran

```
optimum-cli export onnx -m laion/CLIP-ViT-B-32-laion2B-s34B-b79K --framework pt clip_onnx
```

with

```
optimum==1.13.2
torch==2.0.0
transformers==4.34.0
```

and got the same error message as @alachyan-ml did.
Hi @isaac-chung, I used the model id openai/clip-vit-base-patch32
shared by @alachyan-ml: https://github.com/huggingface/optimum/issues/1450#issue-1943412047
Let me have a look at laion/CLIP-ViT-B-32-laion2B-s34B-b79K
and get back to you.
@fxmarty it doesn't seem like @alachyan-ml actually provided the command they ran; they only mentioned that the OpenAI one works with no issue, which we confirmed 👌

> OpenAI CLIP exports as expected without any problems.

```
% optimum-cli export onnx -m openai/clip-vit-base-patch32 --framework pt ./clip.onnx
```
Thanks for taking a look! I started poking around and tried adding the following (still WIP, no success yet). In class `TasksManager`:

```diff
+ _OPEN_CLIP_TASKS_TO_MODEL_LOADERS = {
+     "zero-shot-image-classification": "create_model_and_transforms",
+ }
  _LIBRARY_TO_TASKS_TO_MODEL_LOADER_MAP = {
      "transformers": _TRANSFORMERS_TASKS_TO_MODEL_LOADERS,
      "diffusers": _DIFFUSERS_TASKS_TO_MODEL_LOADERS,
      "timm": _TIMM_TASKS_TO_MODEL_LOADERS,
+     "open_clip": _OPEN_CLIP_TASKS_TO_MODEL_LOADERS,
  }
```
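For context, the tables above feed a plain nested-dict dispatch (library name, then task, then loader name). A minimal offline sketch of that pattern, with illustrative table contents rather than optimum's real internals:

```python
# Illustrative library -> task -> loader-name tables (not optimum's real ones).
_TRANSFORMERS_TASKS_TO_MODEL_LOADERS = {"zero-shot-image-classification": "AutoModel"}
_OPEN_CLIP_TASKS_TO_MODEL_LOADERS = {"zero-shot-image-classification": "create_model_and_transforms"}

_LIBRARY_TO_TASKS_TO_MODEL_LOADER_MAP = {
    "transformers": _TRANSFORMERS_TASKS_TO_MODEL_LOADERS,
    "open_clip": _OPEN_CLIP_TASKS_TO_MODEL_LOADERS,
}

def get_loader_name(library_name, task):
    """Resolve the loader name for a (library, task) pair; raises KeyError if unsupported."""
    return _LIBRARY_TO_TASKS_TO_MODEL_LOADER_MAP[library_name][task]

print(get_loader_name("open_clip", "zero-shot-image-classification"))
# -> create_model_and_transforms
```

Without the `"open_clip"` entry, the lookup fails with a KeyError before a loader can ever be called, which is why the map needs the new row.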
In `TasksManager.get_model_from_task`:

```diff
+ if library_name == "open_clip":
+     model, _, _ = model_class(f"hf-hub:{model_name_or_path}", cache_dir=cache_dir)
+     TasksManager.standardize_model_attributes(
+         model_name_or_path, model, subfolder, revision, cache_dir, library_name
+     )
+     return model
```
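For anyone following along: open_clip's loader returns a `(model, preprocess_train, preprocess_val)` triple rather than a bare model, which is why the branch above unpacks three values. A runnable offline sketch of that shape, using a stub in place of the real `open_clip.create_model_and_transforms` (the stub and the `load_open_clip_model` helper are hypothetical names, not optimum code):

```python
# Stub standing in for open_clip.create_model_and_transforms, which in the
# real library downloads weights when given an "hf-hub:<repo_id>" id and
# returns (model, preprocess_train, preprocess_val).
def fake_create_model_and_transforms(model_id, cache_dir=None):
    model = {"loaded_from": model_id}  # stand-in for the CLIP nn.Module
    return model, "preprocess_train", "preprocess_val"

def load_open_clip_model(model_name_or_path, model_class, cache_dir=None):
    # Mirrors the proposed branch: keep only the model, drop the transforms.
    model, _, _ = model_class(f"hf-hub:{model_name_or_path}", cache_dir=cache_dir)
    return model

m = load_open_clip_model("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", fake_create_model_and_transforms)
print(m["loaded_from"])  # -> hf-hub:laion/CLIP-ViT-B-32-laion2B-s34B-b79K
```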
Here is where I'm at currently:

```
(v1) isaacchung@Isaacs-MBP optimum % optimum-cli export onnx -m laion/CLIP-ViT-B-32-laion2B-s34B-b79K --framework pt clip_onnx
Traceback (most recent call last):
  File "/Users/isaacchung/virtualenv/v1/bin/optimum-cli", line 8, in <module>
    sys.exit(main())
  File "/Users/isaacchung/work/optimum/optimum/commands/optimum_cli.py", line 163, in main
    service.run()
  File "/Users/isaacchung/work/optimum/optimum/commands/export/onnx.py", line 239, in run
    main_export(
  File "/Users/isaacchung/work/optimum/optimum/exporters/onnx/__main__.py", line 354, in main_export
    model_type = "stable-diffusion" if is_stable_diffusion else model.config.model_type.replace("_", "-")
  File "/Users/isaacchung/virtualenv/v1/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'CLIP' object has no attribute 'config'
```
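The AttributeError happens because open_clip's `CLIP` is a plain `torch.nn.Module` with no HF-style `config` attribute, so `model.config.model_type` blows up. A minimal sketch of one possible guard (this is not the eventual fix in optimum; `safe_model_type` and both model classes are hypothetical stand-ins):

```python
class HFStyleModel:
    """Stand-in for a transformers model, which carries a config object."""
    class config:
        model_type = "clip"

class OpenClipStyleModel:
    """Stand-in for open_clip's CLIP: a plain module with no .config."""
    pass

def safe_model_type(model):
    # Hypothetical guard: return None instead of raising AttributeError
    # when the model has no HF-style config.
    config = getattr(model, "config", None)
    return getattr(config, "model_type", None)

print(safe_model_type(HFStyleModel()))       # -> clip
print(safe_model_type(OpenClipStyleModel())) # -> None
```

A guard like this would only move the failure later, though; the exporter still needs a model type (and an ONNX config) to know how to export an open_clip checkpoint.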
I could gather these into a PR and maybe we could continue the discussion for the fix there?
For sure, thank you @isaac-chung!
Hi, sorry, I was away for a few days, but @isaac-chung is correct. I forgot to include the command that yielded the error, but laion/CLIP-ViT-B-32-laion2B-s34B-b79K is the model I was trying to export. I took a look at the thread and it seems like this is being looked into. Thanks!
Hi, any news on this one? I was trying to convert the same model (CLIP-ViT-B-32-laion2B-s34B-b79K) with the exact same error.
Hi @alachyan-ml, @nemphys, @fxmarty, @isaac-chung, do the code changes work for you?
Hey, picking this up again. Let me take a look.
Who can help?
@michaelbenayoun