huggingface / optimum-habana

Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
Apache License 2.0

Llama 3.1 Support -- Rope_scaling issue #1154

Open AzeezIsh opened 1 month ago

AzeezIsh commented 1 month ago

System Info

I was reading that Gaudi supports Llama 3.1 -- https://www.intel.com/content/www/us/en/developer/articles/technical/intel-ai-solutions-support-meta-llama-3-1-launch.html

However, when I put the HF model path in run_generation, I get errors. Here is the traceback:
Traceback (most recent call last):
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/optimum-habana/examples/text-generation/text-generation-pipeline/run_pipeline.py", line 70, in <module>
    main()
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/optimum-habana/examples/text-generation/text-generation-pipeline/run_pipeline.py", line 51, in main
    pipe = GaudiTextGenerationPipeline(args, logger)
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/optimum-habana/examples/text-generation/text-generation-pipeline/pipeline.py", line 15, in __init__
    self.model, self.tokenizer, self.generation_config = initialize_model(args, logger)
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/optimum-habana/examples/text-generation/utils.py", line 509, in initialize_model
    setup_model(args, model_dtype, model_kwargs, logger)
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/optimum-habana/examples/text-generation/utils.py", line 214, in setup_model
    model = AutoModelForCausalLM.from_pretrained(
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/.venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 523, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/.venv/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 952, in from_pretrained
    return config_class.from_dict(config_dict, **unused_kwargs)
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/.venv/lib/python3.10/site-packages/transformers/configuration_utils.py", line 761, in from_dict
    config = cls(**config_dict)
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/.venv/lib/python3.10/site-packages/optimum/habana/transformers/models/llama/configuration_llama.py", line 32, in __init__
    super().__init__(
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/.venv/lib/python3.10/site-packages/transformers/models/llama/configuration_llama.py", line 161, in __init__
    self._rope_scaling_validation()
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/.venv/lib/python3.10/site-packages/transformers/models/llama/configuration_llama.py", line 181, in _rope_scaling_validation
    raise ValueError(
ValueError: `rope_scaling` must be a dictionary with two fields, `type` and `factor`, got {'factor': 8.0, 'low_freq_factor': 1.0, 'high_freq_factor': 4.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}
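
For context: the error comes from a `rope_scaling` schema change. The Llama 3.1 config.json ships a new multi-field dict (with `rope_type: "llama3"`), while `_rope_scaling_validation` in Transformers releases before v4.43 only accepts the old two-field form. A minimal sketch of the two shapes (the new-style values are copied from the ValueError above; the old-style values are illustrative):

```python
# New-style rope_scaling dict shipped in the Llama 3.1 config.json,
# as reported by the ValueError above:
llama31_rope_scaling = {
    "factor": 8.0,
    "low_freq_factor": 1.0,
    "high_freq_factor": 4.0,
    "original_max_position_embeddings": 8192,
    "rope_type": "llama3",
}

# Old two-field schema that Transformers < v4.43 validates against
# (illustrative values, not taken from this thread):
legacy_rope_scaling = {"type": "linear", "factor": 8.0}
```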

Reproduction

cmd = f'python3 /home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/optimum-habana/examples/text-generation/text-generation-pipeline/run_pipeline.py \
    --model_name_or_path meta-llama/Meta-Llama-3.1-8B-Instruct \
    --max_new_tokens 750 --temperature 0.05 --do_sample --n_iterations 1 \
    --use_hpu_graphs --use_kv_cache --bf16 --prompt "{prompt}"'

Expected behavior

Expected to be able to call run_generation with a standard base model and see the generated output, throughput, and generation time.

regisss commented 1 month ago

We'll soon upgrade to Transformers v4.43 in Optimum Habana, which will solve this issue.
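
As a quick sanity check before and after the upgrade, you can print the Transformers version the environment actually resolves (v4.43 is the first release whose validation accepts the `llama3` `rope_type`):

```python
import transformers

# rope_type "llama3" is only accepted by the rope_scaling
# validation from Transformers v4.43 onwards.
print(transformers.__version__)
```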

AzeezIsh commented 1 month ago

Thanks for the quick update! Do you have a rough idea of when the upgrade will land? I'm curious because I am working on a deployment for my AIOps team at Intel, and moving from Llama 3 to Llama 3.1 would bring us plenty of accuracy benefits!

regisss commented 1 month ago

The PR is open here: https://github.com/huggingface/optimum-habana/pull/1163

regisss commented 1 month ago

@AzeezIsh #1163 has been merged, it should now work if you install the lib from source
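
For reference, a typical way to install the library from source (standard pip-from-git syntax; the exact command is not specified in this thread):

```
pip install git+https://github.com/huggingface/optimum-habana.git
```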

AzeezIsh commented 1 month ago

> @AzeezIsh #1163 has been merged, it should now work if you install the lib from source

Still getting an error on rope_scaling. I redownloaded optimum-habana and everything is updated to 1.17. The traceback is almost exactly the same; I'm also sharing the run command just in case:

python3 run_lora_clm.py \
    --overwrite_output_dir=True \
    --model_name_or_path meta-llama/Meta-Llama-3.1-8B-Instruct \
    --dataset_name AzeezIsh/large_oneapi \
    --bf16 True \
    --output_dir ~/Gaudi-tutorials/PyTorch/Single_card_tutorials/large_lora_llama3.1

Traceback (most recent call last):
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/optimum-habana/examples/language-modeling/run_lora_clm.py", line 897, in <module>
    main()
  File "/home/ubuntu/Gaudi-tutorials/PyTorch/Single_card_tutorials/optimum-habana/examples/language-modeling/run_lora_clm.py", line 475, in main
    config = AutoConfig.from_pretrained(model_args.model_name_or_path, **config_kwargs)
  File "/home/ubuntu/habanalabs-venv/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 952, in from_pretrained
    return config_class.from_dict(config_dict, **unused_kwargs)
  File "/home/ubuntu/habanalabs-venv/lib/python3.10/site-packages/transformers/configuration_utils.py", line 761, in from_dict
    config = cls(**config_dict)
  File "/home/ubuntu/habanalabs-venv/lib/python3.10/site-packages/optimum/habana/transformers/models/llama/configuration_llama.py", line 31,
  in __init__
    super().__init__(
  File "/home/ubuntu/habanalabs-venv/lib/python3.10/site-packages/transformers/models/llama/configuration_llama.py", line 161, in __init__
    self._rope_scaling_validation()
  File "/home/ubuntu/habanalabs-venv/lib/python3.10/site-packages/transformers/models/llama/configuration_llama.py", line 181, in _rope_scaling_validation
    raise ValueError(
ValueError: `rope_scaling` must be a dictionary with two fields, `type` and `factor`, got {'factor': 8.0, 'low_freq_factor': 1.0, 'high_freq_factor': 4.0, 'original_max_position_embeddings': 8192, 'rope_type': 'llama3'}
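
Worth noting: this traceback resolves packages from /home/ubuntu/habanalabs-venv, while the first one used a different virtual environment (.venv), so the freshly installed source build may not be the copy this interpreter imports. A minimal diagnostic sketch (assuming the distribution name `optimum-habana`) to confirm what the failing environment actually sees:

```python
import importlib.metadata

import transformers

# Show the version and on-disk location of the transformers copy this
# interpreter imports, plus the installed optimum-habana version, to
# verify the patched source install is the one being used.
print(transformers.__version__, transformers.__file__)
print(importlib.metadata.version("optimum-habana"))
```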