InternLM / xtuner

An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
https://xtuner.readthedocs.io/zh-cn/latest/
Apache License 2.0

LLaVA-Phi3 invalid JSON file. #826

Mikael17125 opened this issue 1 month ago

Mikael17125 commented 1 month ago

Hi, can I know the exact syntax? Mine still errors with this config:

model = dict(
    freeze_llm=True,
    freeze_visual_encoder=True,
    llm=dict(
        attn_implementation='eager',
        pretrained_model_name_or_path='/home/oem/xtuner/pretrained/phi-3/',
        type='transformers.AutoModelForCausalLM.from_pretrained',
        trust_remote_code=True),
    pretrained_pth='/home/oem/xtuner/pretrained/phi-3/',
    type='xtuner.model.LLaVAModel',
    visual_encoder=dict(
        type=CLIPVisionModel.from_pretrained,
        pretrained_model_name_or_path=visual_encoder_name_or_path))

It says I have no config.json:

Traceback (most recent call last):
  File "/home/oem/xtuner/xtuner/tools/train.py", line 360, in <module>
    main()
  File "/home/oem/xtuner/xtuner/tools/train.py", line 356, in main
    runner.train()
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/mmengine/runner/_flexible_runner.py", line 1182, in train
    self.strategy.prepare(
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/mmengine/_strategy/deepspeed.py", line 381, in prepare
    model = self.build_model(model)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/mmengine/_strategy/base.py", line 306, in build_model
    model = MODELS.build(model)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 232, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/home/oem/xtuner/xtuner/model/llava.py", line 61, in __init__
    llm = self._dispatch_lm_model_cfg(llm, max_position_embeddings)
  File "/home/oem/xtuner/xtuner/model/llava.py", line 269, in _dispatch_lm_model_cfg
    llm_cfg = AutoConfig.from_pretrained(
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 965, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/transformers/configuration_utils.py", line 632, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/transformers/configuration_utils.py", line 689, in _get_config_dict
    resolved_config_file = cached_file(
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/transformers/utils/hub.py", line 373, in cached_file
    raise EnvironmentError(
OSError: /home/oem/xtuner/pretrained/phi-3/ does not appear to have a file named config.json. Checkout 'https://huggingface.co//home/oem/xtuner/pretrained/phi-3//tree/None' for available files.
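
As background, transformers' AutoConfig.from_pretrained / AutoModelForCausalLM.from_pretrained expect either a Hugging Face Hub repo id or a local directory that contains config.json next to the weight files; a local directory without config.json triggers exactly this error. A minimal sketch, assuming the goal is to mirror the Hub repo into the local path used in this thread (path reused purely for illustration):

from huggingface_hub import snapshot_download

# Download the full Phi-3 repo (config.json, tokenizer files, weights)
# into the local directory referenced by the config above.
snapshot_download(
    repo_id='microsoft/Phi-3-mini-4k-instruct',
    local_dir='/home/oem/xtuner/pretrained/phi-3/',
)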

Even when I specify the .pth file directly, it still gives me an error:

model = dict(
    freeze_llm=True,
    freeze_visual_encoder=True,
    llm=dict(
        attn_implementation='eager',
        pretrained_model_name_or_path='/home/oem/xtuner/pretrained/phi-3/model.pth',
        type='transformers.AutoModelForCausalLM.from_pretrained',
        trust_remote_code=True),
    pretrained_pth='/home/oem/xtuner/pretrained/phi-3/model.pth',
    type='xtuner.model.LLaVAModel',
    visual_encoder=dict(
        type=CLIPVisionModel.from_pretrained,
        pretrained_model_name_or_path=visual_encoder_name_or_path))
Traceback (most recent call last):
  File "/home/oem/xtuner/xtuner/tools/train.py", line 360, in <module>
    main()
  File "/home/oem/xtuner/xtuner/tools/train.py", line 356, in main
    runner.train()
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/mmengine/runner/_flexible_runner.py", line 1182, in train
    self.strategy.prepare(
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/mmengine/_strategy/deepspeed.py", line 381, in prepare
    model = self.build_model(model)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/mmengine/_strategy/base.py", line 306, in build_model
    model = MODELS.build(model)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 232, in build_model_from_cfg
    return build_from_cfg(cfg, registry, default_args)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/home/oem/xtuner/xtuner/model/llava.py", line 61, in __init__
    llm = self._dispatch_lm_model_cfg(llm, max_position_embeddings)
  File "/home/oem/xtuner/xtuner/model/llava.py", line 269, in _dispatch_lm_model_cfg
    llm_cfg = AutoConfig.from_pretrained(
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 965, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/transformers/configuration_utils.py", line 632, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/transformers/configuration_utils.py", line 726, in _get_config_dict
    raise EnvironmentError(
OSError: It looks like the config file at '/home/oem/xtuner/pretrained/phi-3/model.pth' is not a valid JSON file.
Traceback (most recent call last):
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/transformers/configuration_utils.py", line 722, in _get_config_dict
    config_dict = cls._dict_from_json_file(resolved_config_file)
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/site-packages/transformers/configuration_utils.py", line 825, in _dict_from_json_file
    text = reader.read()
  File "/home/oem/anaconda3/envs/xtuner-env/lib/python3.10/codecs.py", line 322, in decode
    (result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
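
This second failure is consistent with model.pth being a binary PyTorch checkpoint: transformers tries to parse the given path as a JSON config and stops at the first non-UTF-8 byte (0x80). A minimal sketch for inspecting the file as a torch checkpoint instead (path reused from the thread; this does not make the .pth a substitute for an HF-format model directory):

import torch

# Load the checkpoint on CPU and peek at its structure; an xtuner pretrain
# checkpoint is typically a dict of tensors, not a JSON config.
state = torch.load('/home/oem/xtuner/pretrained/phi-3/model.pth', map_location='cpu')
print(type(state))
if isinstance(state, dict):
    print(list(state.keys())[:5])
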
hhaAndroid commented 1 month ago

model = dict(
    freeze_llm=True,
    freeze_visual_encoder=True,
    llm=dict(
        attn_implementation='eager',
        pretrained_model_name_or_path='microsoft/Phi-3-mini-4k-instruct', # ---------------
        type='transformers.AutoModelForCausalLM.from_pretrained',
        trust_remote_code=True),
    pretrained_pth='/home/oem/xtuner/pretrained/phi-3/model.pth',
    type='xtuner.model.LLaVAModel',
    visual_encoder=dict(
        type=CLIPVisionModel.from_pretrained,
        pretrained_model_name_or_path=visual_encoder_name_or_path))
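
If I read this suggestion correctly, the two fields serve different purposes: llm.pretrained_model_name_or_path must point at the Hugging Face model itself (a Hub id, or a local HF-format directory containing config.json), while pretrained_pth points at the xtuner pretraining-stage checkpoint (.pth). Keeping the .pth out of the llm entry avoids both errors above. A minimal pre-flight sketch, using the Hub id from the suggested config:

from transformers import AutoConfig

# If this call succeeds, xtuner's _dispatch_lm_model_cfg (see the traceback)
# will be able to resolve config.json for the llm entry as well.
AutoConfig.from_pretrained('microsoft/Phi-3-mini-4k-instruct', trust_remote_code=True)
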
Mikael17125 commented 1 month ago

It gives me an error like this:

The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
The `seen_tokens` attribute is deprecated and will be removed in v4.41. Use the `cache_position` model input instead.
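
On their face, those two lines are transformers warnings rather than a traceback: the first asks for an explicit attention_mask because the pad token equals the eos token, and the second is a deprecation notice about the seen_tokens attribute. A minimal sketch of the plain-transformers pattern the first warning asks for, shown outside xtuner (the model id and prompt are assumptions for illustration):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('microsoft/Phi-3-mini-4k-instruct', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained('microsoft/Phi-3-mini-4k-instruct', trust_remote_code=True)

inputs = tokenizer('Describe this image.', return_tensors='pt')
outputs = model.generate(
    input_ids=inputs['input_ids'],
    attention_mask=inputs['attention_mask'],  # passing this silences the pad/eos warning
    max_new_tokens=32,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))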