Closed · anilsathyan7 closed this issue 7 months ago
Hi!

FashionCLIP mainly uses the HuggingFace interface for models, so it would be better to use `FashionCLIP('openai/clip-vit-base-patch32')` or a model saved in that form.
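For reference, a minimal sketch of that HF-style usage (assuming the `fashion-clip` package is installed and that `encode_text` behaves as in the package README; the example queries are made up):

```python
from fashion_clip.fashion_clip import FashionCLIP

# Instantiate from a HuggingFace model id; weights are pulled through the HF interface.
fclip = FashionCLIP('openai/clip-vit-base-patch32')

# Embed a couple of text queries (batch_size as exposed by the package).
embeddings = fclip.encode_text(["a red dress", "blue denim jeans"], batch_size=2)
print(embeddings.shape)  # (2, 512) for a ViT-B/32 backbone
```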
If you want to use the OpenAI CLIP format, you need to install the `clip` package from OpenAI. This is the relevant fallback in `_load_model`:
```python
# else it doesn't use HF, assume using OpenAI CLiP
else:
    if os.path.isfile(name):
        model_path = name
    elif validators.url(name):
        # generic url or S3 path
        model_path = _download(_MODELS[name], _CACHE_DIR)
    else:
        raise RuntimeError(f"Model {name} not found or not valid; available models = {list(_MODELS.keys())}")
    model, preprocessing = clip.load(model_path, device=device, download_root=_CACHE_DIR)
```
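In other words, the non-HF fallback expects an OpenAI-format checkpoint on disk. A hedged sketch of what that amounts to, assuming the `clip` package is installed via `pip install git+https://github.com/openai/CLIP.git` (the checkpoint filename here is hypothetical):

```python
import torch
import clip  # OpenAI's clip package, not installed with fashion-clip by default

device = "cuda" if torch.cuda.is_available() else "cpu"

# A local file path hits the os.path.isfile branch above, and clip.load()
# then expects a checkpoint saved in OpenAI CLIP format.
model, preprocess = clip.load("my_clip.pt", device=device)  # hypothetical local checkpoint
```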
Ok, in the codebase there was a file-path reference in `_load_model`.
Sorry, do you mind giving me additional details regarding your request?
Can we load a model from a path to a local checkpoint (PyTorch)?
Yes, but it needs to be in the correct format; it is best if it is an HF checkpoint.
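A sketch of that workflow, assuming you save the checkpoint in HF format first (the local directory name is hypothetical, and passing a local directory to `FashionCLIP` assumes the HF branch of `_load_model` forwards the name to `from_pretrained`, which accepts local directories):

```python
from transformers import CLIPModel, CLIPProcessor

from fashion_clip.fashion_clip import FashionCLIP

# Save a CLIP checkpoint in HuggingFace format to a local directory.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.save_pretrained("./my-clip-checkpoint")      # hypothetical local path
processor.save_pretrained("./my-clip-checkpoint")

# Point FashionCLIP at the saved HF-format checkpoint directory.
fclip = FashionCLIP("./my-clip-checkpoint")
```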
Unable to load a custom / OpenAI CLIP model (https://github.com/openai/CLIP). Is it also possible to load open_clip models (https://github.com/mlfoundations/open_clip)?