openshift-psap / topsail

Test Orchestrator for Performance and Scalability of AI pLatforms
Apache License 2.0

Gate QLoRA preset. #583

Closed: albertoperdomo2 closed this 1 day ago

openshift-ci[bot] commented 1 week ago

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:

Once this PR has been reviewed and has the lgtm label, please assign dagrayvid for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

- **[OWNERS](https://github.com/openshift-psap/topsail/blob/main/OWNERS)**

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
topsail-bot[bot] commented 1 week ago

Jenkins Job #1609

:red_circle: Test of 'rhoai test test_ci' failed after 07 hours 13 minutes 05 seconds. :red_circle:

• Link to the test results.

• Link to the reports index.

Test configuration:

# RHOAI: run fine_tuning test test_ci
PR_POSITIONAL_ARGS: gating_qlora_models
PR_POSITIONAL_ARG_0: fine_tuning-perf-ci
PR_POSITIONAL_ARG_1: gating_qlora_models

• Link to the Rebuild page.

[Failure indicator](https://ci.app-svc-perf.corp.redhat.com/job/ExternalTeams/job/RHODS/job/topsail/1609/artifact/run/f23-h33-000-6018r.rdu2.scalelab.redhat.com//000_test_ci/FAILURES/view/):

/logs/artifacts/000_test_ci/000__matbenchmarking/FAILURE | MatrixBenchmark benchmark failed.
/logs/artifacts/000_test_ci/000__matbenchmarking/qlora/005__qlora/002__test_fine_tuning/000__fine_tuning__run_fine_tuning_job/FAILURE | [000__fine_tuning__run_fine_tuning_job] ./run_toolbox.py from_config fine_tuning run_fine_tuning_job --extra={'name': 'qlora', 'model_name': 'llama-3.1-405b-gptq', 'dataset_name': 'alpaca_data.json', 'gpu': 8, 'dataset_replication': 0.2, 'hyper_parameters': {'fp16': True, 'gradient_accumulation_steps': 4, 'gradient_checkpointing': True, 'lora_alpha': 16, 'max_seq_length': 1024, 'max_steps': -1, 'num_train_epochs': 1, 'packing': False, 'peft_method': 'lora', 'per_device_train_batch_size': 1, 'r': 4, 'torch_dtype': 'float16', 'use_flash_attn': True, 'warmup_ratio': 0.03, 'auto_gptq': ['triton_v2'], 'target_modules': ['all-linear']}, 'dataset_transform': 'convert_alpaca.py'} --> 2
/logs/artifacts/000_test_ci/000__matbenchmarking/qlora/005__qlora/002__test_fine_tuning/FAILURE | CalledProcessError: Command 'set -o errexit;set -o pipefail;set -o nounset;set -o errtrace;ARTIFACT_DIR="/logs/artifacts/000_test_ci/000__matbenchmarking/qlora/005__qlora/002__test_fine_tuning" ./run_toolbox.py from_config fine_tuning run_fine_tuning_job --extra="{'name': 'qlora', 'model_name': 'llama-3.1-405b-gptq', 'dataset_name': 'alpaca_data.json', 'gpu': 8, 'dataset_replication': 0.2, 'hyper_parameters': {'fp16': True, 'gradient_accumulation_steps': 4, 'gradient_checkpointing': True, 'lora_alpha': 16, 'max_seq_length': 1024, 'max_steps': -1, 'num_train_epochs': 1, 'packing': False, 'peft_method': 'lora', 'per_device_train_batch_size': 1, 'r': 4, 'torch_dtype': 'float16', 'use_flash_attn': True, 'warmup_ratio': 0.03, 'auto_gptq': ['triton_v2'], 'target_modules': ['all-linear']}, 'dataset_transform': 'convert_alpaca.py'}"' returned non-zero exit status 2.
Traceback (most recent call last):
  File "/opt/topsail/src/projects/fine_tuning/testing/test_finetuning.py", line 126, in _run_test
    run.run_toolbox_from_config("fine_tuning", "run_fine_tuning_job",
  File "/opt/topsail/src/projects/core/library/run.py", line 49, in run_toolbox_from_config
    return run(f'{cmd_env} ./run_toolbox.py from_config {group} {command} {_dict_to_run_toolbox_args(kwargs)}', **run_kwargs)
  File "/opt/topsail/src/projects/core/library/run.py", line 105, in run
    proc = subprocess.run(command, **args)

[...]

[Test ran on the internal Perflab CI]
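For context, the traceback above shows `run.py` shelling out to `./run_toolbox.py` behind a strict-mode bash prefix, and the `CalledProcessError` comes from the non-zero exit status of that shell command. A minimal sketch of the pattern (simplified names and signatures, not TOPSAIL's actual helpers):

```python
import subprocess

# Strict-mode prefix, copied verbatim from the CalledProcessError message above.
SHELL_PREFIX = "set -o errexit;set -o pipefail;set -o nounset;set -o errtrace;"

def build_toolbox_command(group, command, artifact_dir, extra):
    """Assemble the shell command string (kept pure so it is easy to test)."""
    return (
        f'{SHELL_PREFIX}'
        f'ARTIFACT_DIR="{artifact_dir}" '
        f'./run_toolbox.py from_config {group} {command} --extra="{extra}"'
    )

def run_toolbox_command(group, command, artifact_dir, extra):
    cmd = build_toolbox_command(group, command, artifact_dir, extra)
    # check=True makes subprocess raise CalledProcessError on a non-zero
    # exit status, which is exactly the failure surfaced in the CI log.
    return subprocess.run(cmd, shell=True, check=True)
```

Because of `set -o errexit`/`pipefail`, any failing step inside the toolbox invocation propagates its exit status (here `2`) back to `subprocess.run`.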

kpouget commented 1 week ago

hey @albertoperdomo2, this looks quite good! Only the 405b model failed, which isn't unexpected given the size of the model. I'm not exactly sure what happens with the upload when an invalid test is hit: its LTS payload can't be constructed, so it cannot be uploaded, but I don't remember if that blocks all the uploads or not :/

Long story short, can you disable this 405b model for the time being, until you can validate outside of CPT that it works, and then rerun the test overnight or over the weekend?
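The requested change amounts to filtering the 405b entry out of the gating preset until it is validated. A hypothetical sketch (model names are taken from the CI logs above; the list structure is assumed, not TOPSAIL's real config schema):

```python
# Model names as they appear in the run_fine_tuning_job --extra payloads above.
GATING_QLORA_MODELS = [
    "mistral-7b-v0.3-gptq",
    "llama-3.1-405b-gptq",
]

# Temporarily disabled until validated outside of CPT (hypothetical gate).
DISABLED_MODELS = {"llama-3.1-405b-gptq"}

def enabled_models(models, disabled=frozenset(DISABLED_MODELS)):
    """Return the preset with the temporarily disabled models filtered out."""
    return [m for m in models if m not in disabled]
```

Keeping the disabled names in a separate set (rather than deleting the entry) makes it easy to re-enable the model later by clearing the gate.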

albertoperdomo2 commented 1 week ago

@kpouget I'll launch it tonight so it does not block the cluster for the day 👍🏽

topsail-bot[bot] commented 5 days ago

Jenkins Job #1631

:red_circle: Test of 'rhoai test test_ci' failed after 00 hours 25 minutes 23 seconds. :red_circle:

• Link to the test results.

• Link to the reports index.

Test configuration:

# RHOAI: run fine_tuning test test_ci
PR_POSITIONAL_ARGS: gating_qlora_models
PR_POSITIONAL_ARG_0: fine_tuning-perf-ci
PR_POSITIONAL_ARG_1: gating_qlora_models

• Link to the Rebuild page.

[Failure indicator](https://ci.app-svc-perf.corp.redhat.com/job/ExternalTeams/job/RHODS/job/topsail/1631/artifact/run/f23-h33-000-6018r.rdu2.scalelab.redhat.com//000_test_ci/FAILURES/view/):

/logs/artifacts/000_test_ci/000__matbenchmarking/FAILURE | MatrixBenchmark benchmark failed.
/logs/artifacts/000_test_ci/000__matbenchmarking/qlora/000__qlora/002__fms_fine_tuning/000__fine_tuning__run_fine_tuning_job/FAILURE | [000__fine_tuning__run_fine_tuning_job] ./run_toolbox.py from_config fine_tuning run_fine_tuning_job --extra={'name': 'qlora', 'model_name': 'mistral-7b-v0.3-gptq', 'dataset_name': 'alpaca_data.json', 'gpu': 8, 'dataset_replication': 0.2, 'hyper_parameters': {'fp16': True, 'gradient_accumulation_steps': 4, 'gradient_checkpointing': True, 'lora_alpha': 16, 'max_seq_length': 1024, 'max_steps': -1, 'num_train_epochs': 1, 'packing': False, 'peft_method': 'lora', 'per_device_train_batch_size': 1, 'r': 4, 'torch_dtype': 'float16', 'use_flash_attn': True, 'warmup_ratio': 0.03, 'auto_gptq': ['triton_v2'], 'target_modules': ['all-linear']}, 'dataset_transform': 'convert_alpaca.py'} --> 2
/logs/artifacts/000_test_ci/000__matbenchmarking/qlora/000__qlora/002__fms_fine_tuning/FAILURE | CalledProcessError: Command 'set -o errexit;set -o pipefail;set -o nounset;set -o errtrace;ARTIFACT_DIR="/logs/artifacts/000_test_ci/000__matbenchmarking/qlora/000__qlora/002__fms_fine_tuning" ./run_toolbox.py from_config fine_tuning run_fine_tuning_job --extra="{'name': 'qlora', 'model_name': 'mistral-7b-v0.3-gptq', 'dataset_name': 'alpaca_data.json', 'gpu': 8, 'dataset_replication': 0.2, 'hyper_parameters': {'fp16': True, 'gradient_accumulation_steps': 4, 'gradient_checkpointing': True, 'lora_alpha': 16, 'max_seq_length': 1024, 'max_steps': -1, 'num_train_epochs': 1, 'packing': False, 'peft_method': 'lora', 'per_device_train_batch_size': 1, 'r': 4, 'torch_dtype': 'float16', 'use_flash_attn': True, 'warmup_ratio': 0.03, 'auto_gptq': ['triton_v2'], 'target_modules': ['all-linear']}, 'dataset_transform': 'convert_alpaca.py'}"' returned non-zero exit status 2.
Traceback (most recent call last):
  File "/opt/topsail/src/projects/fine_tuning/testing/test_finetuning.py", line 136, in _run_test
    run.run_toolbox_from_config("fine_tuning", "run_fine_tuning_job",
  File "/opt/topsail/src/projects/core/library/run.py", line 49, in run_toolbox_from_config
    return run(f'{cmd_env} ./run_toolbox.py from_config {group} {command} {_dict_to_run_toolbox_args(kwargs)}', **run_kwargs)
  File "/opt/topsail/src/projects/core/library/run.py", line 105, in run
    proc = subprocess.run(command, **args)

[...]

[Test ran on the internal Perflab CI]

kpouget commented 5 days ago

Test failed because I had a stray Ray job running. We may want to consider including a Ray cleanup in the code (along with the PyTorchJob cleanup).
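The suggested cleanup could be sketched as an extra pre-test step that shells out to `oc`, in the same style as the repo's run helpers. This is an assumption, not TOPSAIL's actual cleanup code: the resource kinds (`rayjobs`, `pytorchjobs`) are the KubeRay and Kubeflow training-operator CRDs, and the namespace is a placeholder.

```python
import subprocess

def build_cleanup_commands(namespace):
    """Hypothetical cleanup: delete leftover Ray and PyTorch jobs.

    --ignore-not-found keeps the step idempotent when nothing is running.
    """
    return [
        f"oc delete rayjobs --all -n {namespace} --ignore-not-found",
        f"oc delete pytorchjobs --all -n {namespace} --ignore-not-found",
    ]

def cleanup_stray_jobs(namespace):
    # Run each delete through the shell, failing fast on a real error.
    for cmd in build_cleanup_commands(namespace):
        subprocess.run(cmd, shell=True, check=True)
```

Running this before `run_fine_tuning_job` would have reclaimed the GPUs held by the stray Ray job instead of failing the matrix run.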

kpouget commented 5 days ago

I'll relaunch the job

topsail-bot[bot] commented 4 days ago

Jenkins Job #1632

:green_circle: Test of 'rhoai test test_ci' succeeded after 06 hours 01 minutes 36 seconds. :green_circle:

• Link to the test results.

• Link to the reports index.

Test configuration:

# RHOAI: run fine_tuning test test_ci
PR_POSITIONAL_ARGS: gating_qlora_models
PR_POSITIONAL_ARG_0: fine_tuning-perf-ci
PR_POSITIONAL_ARG_1: gating_qlora_models

• Link to the Rebuild page.

[Test ran on the internal Perflab CI]

kpouget commented 1 day ago

thanks @albertoperdomo2, merging