InternLM / InternLM-XComposer

InternLM-XComposer2 is a groundbreaking vision-language large model (VLLM) excelling in free-form text-image composition and comprehension.

Cannot connect to Hugging Face to fetch openai/clip-vit-large-patch14-336 — how can I run the inference demo with this model offline? #271

Closed changqinyao closed 2 months ago

changqinyao commented 2 months ago

The openai/clip-vit-large-patch14-336 path is hardcoded inside the library, so inference cannot run in an offline environment.

LightDXY commented 2 months ago

You could download the ViT to a local path and set the path in build_mlp.py to that local path.

deku0818 commented 2 months ago

you could download the vit to your local path and set the path in the build_mlp.py to the local path

I edited the file (vim /root/.cache/huggingface/modules/transformers_modules/build_mlp.py) and replaced vision_tower = 'openai/clip-vit-large-patch14-336' with a local path, but on the next run my change was overwritten.

xiaoze1332 commented 1 month ago

you could download the vit to your local path and set the path in the build_mlp.py to the local path

I edited the file (vim /root/.cache/huggingface/modules/transformers_modules/build_mlp.py) and replaced vision_tower = 'openai/clip-vit-large-patch14-336' with a local path, but on the next run my change was overwritten.

You need to modify the "build_mlp.py" inside the downloaded model's directory, not the "build_mlp.py" in the .cache folder. The copy under .cache is regenerated from the model directory on each run, which is why your edit kept getting overwritten.
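The fix described in this thread amounts to a one-line change in the model's own build_mlp.py. A minimal sketch of that patch as a string rewrite — the variable name `vision_tower` and the Hub id come from this thread, while the helper function and the local paths are hypothetical:

```python
# Hub id that build_mlp.py hard-codes, per this thread:
#     vision_tower = 'openai/clip-vit-large-patch14-336'
HUB_ID = "openai/clip-vit-large-patch14-336"

def point_vision_tower_to_local(source: str, local_path: str) -> str:
    """Rewrite build_mlp.py source text so the vision tower is loaded
    from a local checkpoint directory instead of the Hugging Face Hub.

    Hypothetical helper for illustration; in practice you can make the
    same edit by hand in the downloaded model's build_mlp.py.
    """
    return source.replace(HUB_ID, local_path)

# Example: patch the hard-coded line (paths are placeholders).
original = "vision_tower = 'openai/clip-vit-large-patch14-336'"
patched = point_vision_tower_to_local(
    original, "/models/clip-vit-large-patch14-336"
)
print(patched)  # vision_tower = '/models/clip-vit-large-patch14-336'
```

Apply the change to the build_mlp.py that ships inside the downloaded InternLM-XComposer2 model directory; editing the mirror under /root/.cache/huggingface/modules/transformers_modules/ does not stick, as noted above.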