mlcommons / training

Reference implementations of MLPerf™ training benchmarks
https://mlcommons.org/en/groups/training
Apache License 2.0

fix llama2_70b_lora broken link for Accelerate config file in the readme #766

Closed — hiwotadese closed this pull request 1 month ago

github-actions[bot] commented 2 months ago

MLCommons CLA bot:
Thank you very much for your submission, we really appreciate it. Before we can accept your contribution, we ask that you sign the MLCommons CLA (Apache 2). Please use this [Google form](https://forms.gle/Ew1KkBVpyeJDuRw67) to initiate authorization. If you are from an MLCommons member organization, we will request that you be added to the CLA. If you are not from a member organization, we will email you a CLA to sign. For any questions, please contact support@mlcommons.org.
0 out of 1 committers have signed the MLCommons CLA.
:x: @Hiwot Kassa
Hiwot Kassa does not seem to be a GitHub user. You need a GitHub account to become an MLCommons member. If you already have a GitHub account, please add the email address used for this commit to your account.
You can retrigger this bot by commenting `recheck` on this Pull Request.

ShriyaPalsamudram commented 2 months ago

@hiwotadese can you also set gradient_clipping: 0.3 in the config? Details are in https://github.com/mlcommons/training/issues/765

hiwotadese commented 2 months ago

@ShriyaPalsamudram the config file here https://github.com/mlcommons/training/blob/master/llama2_70b_lora/configs/default_config.yaml already has `gradient_clipping: 0.3`.
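For reference, a minimal sketch of how such a setting typically appears in an Accelerate config file. The surrounding keys are illustrative assumptions based on Accelerate's DeepSpeed-style config layout, not the exact contents of the repo's `default_config.yaml`:

```yaml
# Hypothetical fragment only — surrounding keys are assumptions,
# not quoted from llama2_70b_lora/configs/default_config.yaml.
# In Accelerate, gradient_clipping commonly lives under the
# distributed-backend section (e.g. deepspeed_config).
deepspeed_config:
  gradient_clipping: 0.3
```

The exact nesting depends on the distributed backend configured for the benchmark; the relevant point for issue #765 is simply that the value is 0.3.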

hiwotadese commented 1 month ago

recheck