InternLM / InternLM-XComposer

InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output

Error when running the huggingface sample code #229

Open sssssshf opened 3 months ago

sssssshf commented 3 months ago

This is a screenshot of my directory structure; the model has already been downloaded into it. But when I run it, I get this error:

/home/shf/anaconda3/envs/llama/bin/python /media/shf/sda/code/InternLM-XComposer-main/test.py
You are using a model of type internlmxcomposer2 to instantiate a model of type internlm. This is not supported for all configurations of models and can yield errors.
[2024-03-25 14:12:17,398] [INFO] [real_accelerator.py:158:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Set max length to 4096
Traceback (most recent call last):
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connection.py", line 174, in _new_conn
    conn = connection.create_connection(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/util/connection.py", line 95, in create_connection
    raise err
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/util/connection.py", line 85, in create_connection
    sock.connect(sa)
OSError: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connectionpool.py", line 714, in urlopen
    httplib_response = self._make_request(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connectionpool.py", line 403, in _make_request
    self._validate_conn(conn)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connectionpool.py", line 1053, in _validate_conn
    conn.connect()
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connection.py", line 363, in connect
    self.sock = conn = self._new_conn()
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connection.py", line 186, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7fd9260ae160>: Failed to establish a new connection: [Errno 101] Network is unreachable

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/connectionpool.py", line 798, in urlopen
    retries = retries.increment(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/urllib3/util/retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14-336/resolve/main/config.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fd9260ae160>: Failed to establish a new connection: [Errno 101] Network is unreachable'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1261, in hf_hub_download
    metadata = get_hf_file_metadata(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1667, in get_hf_file_metadata
    r = _request_wrapper(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 385, in _request_wrapper
    response = _request_wrapper(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 408, in _request_wrapper
    response = get_session().request(method=method, url=url, **params)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 67, in send
    return super().send(request, *args, **kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /openai/clip-vit-large-patch14-336/resolve/main/config.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fd9260ae160>: Failed to establish a new connection: [Errno 101] Network is unreachable'))"), '(Request ID: 0f1ae55b-e895-4ed1-af62-de65d803ba20)')

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/utils/hub.py", line 398, in cached_file
    resolved_file = hf_hub_download(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
    return fn(*args, **kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1406, in hf_hub_download
    raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/media/shf/sda/code/InternLM-XComposer-main/test.py", line 7, in <module>
    model = AutoModel.from_pretrained('internlm/internlm-xcomposer2-vl-7b', local_files_only=True, trust_remote_code=True).cuda().eval()
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 556, in from_pretrained
    return model_class.from_pretrained(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/modeling_utils.py", line 3375, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/shf/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-vl-7b/modeling_internlm_xcomposer2.py", line 67, in __init__
    self.vit = build_vision_tower()
  File "/home/shf/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-vl-7b/build_mlp.py", line 11, in build_vision_tower
    return CLIPVisionTower(vision_tower)
  File "/home/shf/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-vl-7b/build_mlp.py", line 58, in __init__
    self.load_model()
  File "/home/shf/.cache/huggingface/modules/transformers_modules/internlm-xcomposer2-vl-7b/build_mlp.py", line 62, in load_model
    self.vision_tower = CLIPVisionModel.from_pretrained(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2981, in from_pretrained
    config, model_kwargs = cls.config_class.from_pretrained(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/models/clip/configuration_clip.py", line 251, in from_pretrained
    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/configuration_utils.py", line 633, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/configuration_utils.py", line 688, in _get_config_dict
    resolved_config_file = cached_file(
  File "/home/shf/anaconda3/envs/llama/lib/python3.8/site-packages/transformers/utils/hub.py", line 441, in cached_file
    raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like openai/clip-vit-large-patch14-336 is not the path to a directory containing a file named config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
(llama) shf@shf-Z790-UD:/media/shf/sda/code/InternLM-XComposer-main$

Why does this happen, how can I fix it, and how do I get inference to run normally?
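For what it's worth, the error message above already hints at transformers' offline mode. Below is a minimal sketch of that route, under the assumption that the openai/clip-vit-large-patch14-336 weights can be cached once (from a machine that does have network access, or by copying them into the local Hugging Face cache) and then reused offline:

# Step 1 (on a machine with network access): cache the CLIP vision tower locally.
from huggingface_hub import snapshot_download
snapshot_download("openai/clip-vit-large-patch14-336")

# Step 2 (at the very top of test.py on the offline machine, before importing transformers):
# force cache-only lookups so no connection to huggingface.co is attempted.
import os
os.environ["HF_HUB_OFFLINE"] = "1"        # huggingface_hub: use only cached files
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # transformers: never try to reach the Hub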

sssssshf commented 3 months ago

When I use the modelscope sample code with the int4 version, I get a missing-config-file error: OSError: /home/shf/.cache/modelscope/hub/Shanghai_AI_Laboratory/internlm-xcomposer2-vl-7b-4bit does not appear to have a file named config.json. Checkout 'https://huggingface.co//home/shf/.cache/modelscope/hub/Shanghai_AI_Laboratory/internlm-xcomposer2-vl-7b-4bit/None' for available files.

chuangzhidan commented 3 months ago

File "/root/miniconda3/lib/python3.8/site-packages/transformers/utils/hub.py", line 429, in cached_file raise EnvironmentError( OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like openai/clip-vit-large-patch14-336 is not the path to a directory containing a file named config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'. 我也遇到这个问题

chuangzhidan commented 3 months ago

(Quoting sssssshf's original post and traceback above.)

Have you solved it?

TAOSHss commented 3 months ago

When I use the modelscope sample code with the int4 version, I get a missing-config-file error: OSError: /home/shf/.cache/modelscope/hub/Shanghai_AI_Laboratory/internlm-xcomposer2-vl-7b-4bit does not appear to have a file named config.json. Checkout 'https://huggingface.co//home/shf/.cache/modelscope/hub/Shanghai_AI_Laboratory/internlm-xcomposer2-vl-7b-4bit/None' for available files.

I ran into this problem too.

TAOSHss commented 3 months ago

(Quoting the same modelscope int4 config.json error as above.)

@panzhang0212 could the authors take a look at this?

mirrorboat commented 3 months ago

I ran into a similar problem when fine-tuning ShareGPT4V.

Problem description

When running train_mem.py for instruction tuning of the pretrained model, the local LLM & projector and Visual Encoder weight paths passed in are both correct, but the code still cannot find the local Visual Encoder weights.

Cause analysis: the code logic behind the fine-tuning

model.get_model().initialize_vision_modules(
            model_args=model_args,
            fsdp=training_args.fsdp
        )

Solution

Manually change 'mm_vision_tower' in the config.json of the locally downloaded LLM+projector weights to the local path of the Visual Encoder weights.
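A minimal sketch of that edit for the ShareGPT4V setup described above; the paths are placeholders for illustration, not actual values from this thread:

import json

llm_config_path = "/path/to/ShareGPT4V_llm_projector/config.json"  # hypothetical LLM+projector checkpoint
local_vit_path = "/path/to/clip-vit-large-patch14-336"             # hypothetical local Visual Encoder weights

with open(llm_config_path) as f:
    config = json.load(f)

# Point 'mm_vision_tower' at the local Visual Encoder instead of the hub repo id,
# so the fine-tuning code loads it from disk rather than from huggingface.co.
config["mm_vision_tower"] = local_vit_path

with open(llm_config_path, "w") as f:
    json.dump(config, f, indent=2)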

chuangzhidan commented 3 months ago

'mm_vision_tower' in config.json

I ran into a similar problem when fine-tuning ShareGPT4V.

Problem description

When running train_mem.py for instruction tuning of the pretrained model, the local LLM & projector and Visual Encoder weight paths passed in are both correct, but the code still cannot find the local Visual Encoder weights.

Cause analysis: the code logic behind the fine-tuning

  • Run train_mem.py
  • train_mem.py calls train.py
  • Around line 938, train_mem.py calls a function from share4v_arch.py (shown below)
model.get_model().initialize_vision_modules(
            model_args=model_args,
            fsdp=training_args.fsdp
        )
  • Around line 36, share4v_arch.py calls the function build_vision_tower(model_args) from builder.py, whose definition contains the following code
vision_tower = getattr(vision_tower_cfg, 'mm_vision_tower', getattr(vision_tower_cfg, 'vision_tower', None))

What this line does is first look for the attribute 'mm_vision_tower' in the config.json that ships with the LLM+projector weights (usually downloaded from huggingface to the local machine); for the 7B model, in the hf repo this attribute is set to "Lin-Chen/ShareGPT4V-7B_Pretrained_vit-large336-l12". Only if that lookup fails does it fall back to the 'vision_tower' argument supplied on the command line (i.e. the local path to the Visual Encoder weight folder that we set). In short, when loading the Visual Encoder the code ends up preferring to download the weights from the huggingface remote repo rather than using the weights already downloaded locally.

Solution

Manually change 'mm_vision_tower' in the config.json of the locally downloaded LLM+projector weights to the local path of the Visual Encoder weights.

My config.json doesn't seem to have 'mm_vision_tower'; I searched and couldn't find it.

mirrorboat commented 3 months ago

(Quoting the exchange above.)

I am fine-tuning ShareGPT4V; you can see 'mm_vision_tower' in https://huggingface.co/Lin-Chen/ShareGPT4V-7B_Pretrained_vit-large336-l12_vicuna-7b-v1.5/blob/main/config.json. If you are using InternLM-XComposer2, it may be different.

chuangzhidan commented 2 months ago

InternLM-XComposer2

Yes, I'm using InternLM-XComposer2. @panzhang0212 could you take a look?

qinzhenyi1314 commented 2 months ago

I've hit the same problem. Has anyone solved it?

xiayq1 commented 2 months ago

I've solved this problem: the cause is that the CLIP model cannot be downloaded. Download CLIP manually and change build_mlp.py as follows:

def build_vision_tower():
    # vision_tower = 'openai/clip-vit-large-patch14-336'
    vision_tower = '/InternLM-XComposer-main/internlm/clip-vit-large-patch14-336'
    return CLIPVisionTower(vision_tower)
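
If it helps, here is a sketch of fetching the CLIP weights into such a local folder; the target path below just mirrors the example above, and any local directory works as long as build_vision_tower points at it:

from huggingface_hub import snapshot_download

# Downloads the full openai/clip-vit-large-patch14-336 repo into a local directory.
snapshot_download(
    repo_id="openai/clip-vit-large-patch14-336",
    local_dir="/InternLM-XComposer-main/internlm/clip-vit-large-patch14-336",
)
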
zzyzeyuan commented 2 months ago

(Quoting xiayq1's build_mlp.py fix above.)

May I ask which local file '/InternLM-XComposer-main/internlm/clip-vit-large-patch14-336' corresponds to? I couldn't find it. I'm using internlm-xcomposer2-vl-7b.

xiayq1 commented 2 months ago

(Quoting the build_mlp.py fix and the question above.)

You have to download that model yourself. The file to edit is build_mlp.py. I recommend stepping through it with a debugger.

Zking668 commented 2 months ago

When using the modelscope sample code with the int4 version, there is a missing-config-file error: OSError: /home/shf/.cache/modelscope/hub/Shanghai_AI_Laboratory/internlm-xcomposer2-vl-7b-4bit does not appear to have a file named config.json. Checkout 'https://huggingface.co//home/shf/.cache/modelscope/hub/Shanghai_AI_Laboratory/internlm-xcomposer2-vl-7b-4bit/None' for available files.

I've run into the same problem as well. How can it be solved? (screenshot omitted)

isruihu commented 1 month ago

(Quoting the build_mlp.py fix discussion above.)

In my case build_mlp.py gets refreshed automatically every time, so my edit has no effect. Has anyone else run into this?
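
One pattern that may help here (a sketch, not verified against this repo): download the whole model repo to a directory you control and load it from that local path. With trust_remote_code, transformers picks up build_mlp.py from the location you load from, so an edit made inside that local directory should not be overwritten by a fresh download from the Hub:

from transformers import AutoModel, AutoTokenizer
from huggingface_hub import snapshot_download

# One-time download of the full model repo to a local folder (path is illustrative).
local_dir = snapshot_download("internlm/internlm-xcomposer2-vl-7b",
                              local_dir="./internlm-xcomposer2-vl-7b")

# Edit ./internlm-xcomposer2-vl-7b/build_mlp.py here, then load from the local path.
model = AutoModel.from_pretrained(local_dir, trust_remote_code=True).cuda().eval()
tokenizer = AutoTokenizer.from_pretrained(local_dir, trust_remote_code=True)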

sssssshf commented 1 month ago

import torch
from modelscope import snapshot_download, AutoModel, AutoTokenizer

torch.set_grad_enabled(False)

# init model and tokenizer
model_dir = snapshot_download('Shanghai_AI_Laboratory/internlm-xcomposer2-vl-7b-4bit')
# alternative model paths tried in the original post:
# model_dir = 'modelscope/internlm-xcomposer2-vl-7b'
# model_dir = 'Shanghai_AI_Laboratory/internlm-xcomposer2-vl-7b-4bit'
print(model_dir)

model = AutoModel.from_pretrained(model_dir, trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
model.tokenizer = tokenizer

text = '有没有人摔倒'  # "Did anyone fall down?"
image = '1.jpg'
with torch.cuda.amp.autocast():
    response, _ = model.chat(tokenizer, query=text, image=image, history=[], do_sample=False)
print(response)

Output: "The image is a quotation attributed to Oscar Wilde, set against a beautiful sunset background.

The quote reads 'Live life with no excuses, travel with no regrets', that is, live without excuses and travel without regrets.

At sunset, two figures stand on a hill, apparently enjoying the view. The whole scene conveys a positive, dream-chasing mood."

OK.

sssssshf commented 1 month ago

(Quoting the earlier modelscope int4 config.json question above.)

Somehow it just ended up working. I ran inference both with transformers and with modelscope, and both succeeded.

shihairuo commented 2 weeks ago

(Quoting the build_mlp.py fix discussion above.)

I've also run into this. Has it been solved?