Closed chaoshine closed 9 months ago
https://towhee.io/sentence-embedding/transformers Just pass the local Hugging Face model directory
Hello, I have a question that may be related to this issue: how can I configure this model to load from my local folder (not from a remote URL)?
Pass the weights via checkpoint_path.
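Before passing a directory as checkpoint_path, it can help to confirm that it actually looks like a saved Hugging Face model folder. A minimal stdlib sketch; the helper name and the expected file names are assumptions based on typical transformers exports:

```python
import os
import tempfile

# Hypothetical helper: check that a directory looks like a saved
# Hugging Face model folder before passing it as checkpoint_path.
# Such a folder normally contains config.json plus a weights file.
WEIGHT_FILES = ('pytorch_model.bin', 'model.safetensors', 'tf_model.h5')

def looks_like_hf_model_dir(path):
    path = os.path.expanduser(path)  # '~' is NOT expanded automatically
    if not os.path.isdir(path):
        return False
    files = set(os.listdir(path))
    return 'config.json' in files and any(w in files for w in WEIGHT_FILES)

# Demo with a throwaway directory:
with tempfile.TemporaryDirectory() as d:
    print(looks_like_hf_model_dir(d))           # False: empty folder
    open(os.path.join(d, 'config.json'), 'w').close()
    open(os.path.join(d, 'pytorch_model.bin'), 'w').close()
    print(looks_like_hf_model_dir(d))           # True
```

If the check fails, the string will likely be treated as a Hub repo id instead of a local path, which leads to the errors discussed below.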
I used this, but got an error.
import sys
import json

from towhee import pipe, ops

image_path = sys.argv[1]

img_pipe = (
    pipe.input('url')
    .map('url', 'img', ops.image_decode.cv2_rgb())
    .map('img', 'vec', ops.image_text_embedding.clip(
        model_name='clip_vit_large_patch14_336',
        modality='image',
        checkpoint_path='~/local/huggingface/hub/models--openai--clip-vit-large-patch14-336/'))
    .output('img', 'vec')
)
Here you can see the error log:
AttributeError: type object 'CLIPModel' has no attribute 'from_config'
I looked at your clip.py, and I'm not sure whether the fault is mine or yours.
def create_model(model_name, modality, checkpoint_path, device):
    if checkpoint_path is None:
        hf_clip_model = CLIPModel.from_pretrained(model_name)
    else:
        hf_clip_config = CLIPModel.from_config(model_name)
        hf_clip_model = CLIPModel.from_pretrained(
            checkpoint_path, config=hf_clip_config)
    hf_clip_model.to(device)
    hf_clip_model.eval()
    if modality == 'image':
        clip = CLIPModelVision(hf_clip_model)
    elif modality == 'text':
        clip = CLIPModelText(hf_clip_model)
    else:
        raise ValueError("modality[{}] not implemented.".format(modality))
    return clip
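The AttributeError quoted above fires because concrete model classes such as CLIPModel have no from_config classmethod in transformers (that method lives on the Auto* classes); configs are usually built via CLIPConfig.from_pretrained instead. A pure-Python sketch of the corrected branch, using stub stand-ins for the real transformers classes so the control flow can be shown without downloading anything:

```python
# Stub stand-ins for the real transformers classes; they are used here
# only to illustrate the corrected control flow.
class CLIPConfig:
    @classmethod
    def from_pretrained(cls, name_or_path):
        return cls()

class CLIPModel:
    def __init__(self, config=None):
        self.config = config

    @classmethod
    def from_pretrained(cls, name_or_path, config=None):
        return cls(config=config)

def create_model_weights(model_name, checkpoint_path):
    if checkpoint_path is None:
        return CLIPModel.from_pretrained(model_name)
    # Fixed branch: build the config with CLIPConfig.from_pretrained,
    # not the nonexistent CLIPModel.from_config.
    config = CLIPConfig.from_pretrained(model_name)
    return CLIPModel.from_pretrained(checkpoint_path, config=config)

m = create_model_weights('clip_vit_large_patch14_336', './my-checkpoint')
print(isinstance(m.config, CLIPConfig))  # True
```

With the real library, the same shape applies; the checkpoint folder's own config.json also makes the explicit config argument optional in many cases.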
@wxywb
So, what's next?
@HarwordLiu ,I've updated the operator, please remove your cached operator and try it again.
I saw you removed the config. Before your update, I had tried the same thing locally and got the same error:
Repo id must be in the form 'repo_name' or 'namespace/repo_name': '~/local/huggingface/hub/models--openai--clip-vit-large-patch14-336/'. Use `repo_type` argument if needed.
I guess maybe the root path of my local model cache is wrong? Let me show you my local model cache files. My local path points directly here:
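One likely cause of the "Repo id must be in the form..." error above is the unexpanded `~` in the path: the Hugging Face libraries do not expand `~` themselves, and a string that is not an existing directory falls through to repo-id validation. A stdlib-only sketch of the difference:

```python
import os

raw = '~/local/huggingface/hub/models--openai--clip-vit-large-patch14-336/'
expanded = os.path.expanduser(raw)

# The raw string still starts with '~', so os.path.isdir(raw) is False
# and the library treats it as a (malformed) repo id. The expanded
# string is an absolute filesystem path.
print(raw.startswith('~'))        # True
print(expanded.startswith('~'))   # False
```

Expanding the path (or passing an absolute path) before handing it to checkpoint_path sidesteps the repo-id check entirely.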
@HarwordLiu , it seems you want to use your cached weights; in that case, you can use clip directly. The checkpoint_path is useful when you have a saved, customized CLIP model.
img_pipe = (
    pipe.input('url')
    .map('url', 'img', ops.image_decode.cv2_rgb())
    .map('img', 'vec', ops.image_text_embedding.clip(
        model_name='clip_vit_large_patch14_336',
        modality='image'))
    .output('img', 'vec')
)
Save the model:
from transformers import CLIPModel

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14-336")
model.save_pretrained('models--openai--clip-vit-large-patch14-336')
Load from it:
img_pipe = (
    pipe.input('url')
    .map('url', 'img', ops.image_decode.cv2_rgb())
    .map('img', 'vec', ops.image_text_embedding.clip(
        model_name='clip_vit_large_patch14_336',
        checkpoint_path='./models--openai--clip-vit-large-patch14-336',
        modality='image'))
    .output('img', 'vec')
)
Thanks for all your help. It's helpful; it works now.
@wxywb Why does it still download files from Hugging Face after successfully loading the model? This is my test log. I think if I use a local model, it should not download any files from Hugging Face anymore.
@HarwordLiu , this is because of the tokenizer (that's why we need model_name). Usually a saved model folder does not contain the tokenizer, so we provide model_name to specify the tokenizer explicitly. If you have a folder that contains the tokenizer (something like vocab.json), you can fill model_name with that path and the tokenizer and processor will be loaded from it.
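Per the explanation above, a folder can serve as model_name only if it also holds the tokenizer/processor files. A small stdlib sketch that checks for them; the exact file names are assumptions based on typical CLIP exports:

```python
import os
import tempfile

# Typical tokenizer files in a CLIP export; exact names may vary.
TOKENIZER_FILES = ('vocab.json', 'merges.txt', 'tokenizer_config.json')

def has_tokenizer_files(path):
    """Return True if the folder contains at least one tokenizer file."""
    files = set(os.listdir(os.path.expanduser(path)))
    return any(name in files for name in TOKENIZER_FILES)

with tempfile.TemporaryDirectory() as d:
    print(has_tokenizer_files(d))   # False: nothing saved yet
    open(os.path.join(d, 'vocab.json'), 'w').close()
    print(has_tokenizer_files(d))   # True: safe to point model_name here
```

When saving a model for fully local use, saving the processor alongside it (transformers' save_pretrained on the processor object) is what produces these files.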
Amazing, I can't believe it, although it works.
Filling model_name with a file path looks like such a hack.
This hack is beyond the initial design, since the Hugging Face transformers library will still connect to the server even though it has cached the tokenizer.
That's it! No wonder I replaced the cached files on the server (~/.cache) but it still connected to Hugging Face to download. Thank you for everything.
.map('img', 'embedding', ops.image_embedding.timm(model_name='resnet50')) I cannot access Hugging Face; how can I solve this problem?
@ycqu , network issues are beyond the framework's capability.
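For machines that cannot reach Hugging Face at all, the transformers and huggingface_hub libraries honor offline-mode environment variables; with the files already cached, setting them before importing the libraries skips network lookups. A minimal sketch (the variables are the documented ones, but whether offline mode suffices depends on everything being cached):

```python
import os

# Set these before importing transformers / huggingface_hub so that
# cached models and tokenizers are used without contacting the Hub.
os.environ['TRANSFORMERS_OFFLINE'] = '1'
os.environ['HF_HUB_OFFLINE'] = '1'

print(os.environ['TRANSFORMERS_OFFLINE'])  # 1
```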
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. Rotten issues close after 30 days of inactivity; stale issues and pull requests close after 7 days of inactivity. Reopen the issue with /reopen.
Is there an existing issue for this?
Is your feature request related to a problem? Please describe.
Our server has issues accessing the Hugging Face website. Therefore, is it possible to download the model to the local machine and then continue working by configuring Towhee to load the model locally? If so, how? Thanks.
Describe the solution you'd like.
No response
Describe an alternate solution.
No response
Anything else? (Additional Context)
No response