CSEEduanyu opened 3 days ago
Please provide more details about your case. Did you modify anything, or did you run `inference.py` directly?
Yes, I just changed the loading of the vision model to a local path; my changes should not affect the randomness.
Can you provide the code after modification?
```python
local_bin = self.vision_tower_name + "/open_clip_pytorch_model.bin"
clip_model, processor = create_model_from_pretrained("convnext_xxlarge", pretrained=local_bin)
```

Just like this.
I think the problem probably comes from this local loading. But without your exact code and weights, I cannot reproduce your problem.
How can I try a web demo of cam34b, like https://internvl.opengvlab.com/?
We will release the public demo very soon.
After some logging, I found that the same picture gets different embeddings from CLIP-ConvNext across runs; this is why the outputs differ. Other vision models are fine. What's so special about CLIP-ConvNext?
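A minimal way to check this, assuming the open_clip loading shown above (the weight path and test image are hypothetical placeholders): encode the same image twice and compare the embeddings.

```python
import torch
from PIL import Image
from open_clip import create_model_from_pretrained

# Hypothetical local weight path, mirroring the snippet above.
local_bin = "/path/to/CLIP-ConvNext/open_clip_pytorch_model.bin"
model, preprocess = create_model_from_pretrained("convnext_xxlarge", pretrained=local_bin)

image = preprocess(Image.open("test.jpg")).unsqueeze(0)  # hypothetical test image
with torch.no_grad():
    emb1 = model.encode_image(image)
    emb2 = model.encode_image(image)

# In the setup described in this thread the two embeddings differ;
# after model.eval() they should match.
print(torch.allclose(emb1, emb2))
```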
@penghao-wu Also, when will the demo be available online? I wanted to verify that my offline deployment is correct.
The open_clip model needs `clip_model.eval()` after `create_model_from_pretrained()`.
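For reference, a sketch of the fixed loading under the same assumptions (the path is a placeholder; the thread identifies the missing `.eval()` call as the cause):

```python
from open_clip import create_model_from_pretrained

local_bin = "/path/to/CLIP-ConvNext/open_clip_pytorch_model.bin"  # hypothetical path
clip_model, processor = create_model_from_pretrained("convnext_xxlarge", pretrained=local_bin)

# Switch to inference mode: this disables stochastic layers such as
# dropout/stochastic depth, so the same image always yields the same embedding.
clip_model.eval()
```

This would also explain why the other vision towers seemed fine: loaders such as transformers' `from_pretrained` return models in eval mode by default, whereas here, per the thread, the train/eval state must be set explicitly after `create_model_from_pretrained()`.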