========= 24 passed, 45 skipped, 117 deselected, 13 warnings in 6.59s ==========
reports/huggingface_unit_tests__run_models_gpu_models/auto/stats.txt
==== 1 failed, 16 passed, 48 skipped, 74 deselected, 15 warnings in 25.40s =====
reports/huggingface_unit_tests__run_models_gpu_models/altclip/stats.txt
========= 67 passed, 137 skipped, 123 deselected, 13 warnings in 8.36s =========
reports/huggingface_unit_tests__run_models_gpu_models/align/stats.txt
======== 62 passed, 143 skipped, 141 deselected, 12 warnings in 36.42s =========
reports/huggingface_unit_tests__run_models_gpu_models/albert/stats.txt
======== 41 passed, 105 skipped, 240 deselected, 13 warnings in 12.37s =========
### Performance Test Results for upstream_sync_test_2
python examples/pytorch/language-modeling/run_mlm.py --model_name_or_path bert-base-uncased --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-mlm --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --max_steps 500
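Each stats.txt referenced in this issue ends with a pytest short-summary line like the ones quoted above. Below is a minimal Python sketch for turning such a line into counts; the regex and the function name are illustrative assumptions, not part of the CI tooling:

```python
import re

# Matches fragments like "1 failed", "16 passed", "48 skipped" in a pytest
# short-summary line such as:
# "==== 1 failed, 16 passed, 48 skipped, 74 deselected, 15 warnings in 25.40s ====="
SUMMARY_RE = re.compile(r"(\d+) (failed|passed|skipped|deselected|errors?|warnings?)")

def parse_summary(line: str) -> dict[str, int]:
    """Return a mapping like {'failed': 1, 'passed': 16, ...} for one summary line."""
    return {kind: int(count) for count, kind in SUMMARY_RE.findall(line)}

# Example with one of the altclip lines from this issue:
print(parse_summary(
    "==== 1 failed, 16 passed, 48 skipped, 74 deselected, 15 warnings in 25.40s ====="
))
# {'failed': 1, 'passed': 16, 'skipped': 48, 'deselected': 74, 'warnings': 15}
```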
========= 24 passed, 45 skipped, 117 deselected, 13 warnings in 6.63s ==========
reports/huggingface_unit_tests__run_models_gpu_models/auto/stats.txt
==== 1 failed, 16 passed, 48 skipped, 74 deselected, 15 warnings in 22.72s =====
reports/huggingface_unit_tests__run_models_gpu_models/altclip/stats.txt
========= 67 passed, 137 skipped, 123 deselected, 13 warnings in 8.25s =========
reports/huggingface_unit_tests__run_models_gpu_models/align/stats.txt
======== 62 passed, 143 skipped, 141 deselected, 12 warnings in 36.90s =========
reports/huggingface_unit_tests__run_models_gpu_models/albert/stats.txt
======== 41 passed, 105 skipped, 240 deselected, 13 warnings in 12.42s =========
### Performance Test Results for upstream_sync_test_2
python examples/pytorch/language-modeling/run_mlm.py --model_name_or_path bert-base-uncased --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-mlm --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --max_steps 500
============================== 1 warning in 0.08s ==============================
reports/huggingface_unit_tests__run_models_gpu_models/audio_spectrogram_transformer/stats.txt
========= 24 passed, 45 skipped, 117 deselected, 13 warnings in 6.44s ==========
reports/huggingface_unit_tests__run_models_gpu_models/altclip/stats.txt
========= 67 passed, 137 skipped, 123 deselected, 13 warnings in 7.99s =========
reports/huggingface_unit_tests__run_models_gpu_models/align/stats.txt
======== 62 passed, 143 skipped, 141 deselected, 12 warnings in 14.30s =========
reports/huggingface_unit_tests__run_models_gpu_models/albert/stats.txt
======== 41 passed, 105 skipped, 240 deselected, 13 warnings in 12.25s =========
### Performance Test Results for Automated PR tmp-develop_test_2-20240919
python examples/pytorch/language-modeling/run_mlm.py --model_name_or_path bert-base-uncased --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-mlm --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --max_steps 500
============================== 1 warning in 0.01s ==============================
reports/huggingface_unit_tests__run_models_gpu_models/audio_spectrogram_transformer/stats.txt
========= 24 passed, 45 skipped, 117 deselected, 13 warnings in 6.34s ==========
reports/huggingface_unit_tests__run_models_gpu_models/altclip/stats.txt
========= 67 passed, 137 skipped, 123 deselected, 13 warnings in 7.87s =========
reports/huggingface_unit_tests__run_models_gpu_models/align/stats.txt
======== 62 passed, 143 skipped, 141 deselected, 12 warnings in 14.14s =========
reports/huggingface_unit_tests__run_models_gpu_models/albert/stats.txt
======== 41 passed, 105 skipped, 240 deselected, 13 warnings in 11.30s =========
### Performance Test Results for tmp-develop_test_2-20240919
python examples/pytorch/language-modeling/run_mlm.py --model_name_or_path bert-base-uncased --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-mlm --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --max_steps 500
============================= 2 warnings in 0.01s ==============================
reports/huggingface_unit_tests__run_models_gpu_models/audio_spectrogram_transformer/stats.txt
========= 21 passed, 41 skipped, 117 deselected, 14 warnings in 6.35s ==========
reports/huggingface_unit_tests__run_models_gpu_models/altclip/stats.txt
========= 58 passed, 125 skipped, 123 deselected, 14 warnings in 7.86s =========
reports/huggingface_unit_tests__run_models_gpu_models/align/stats.txt
======== 53 passed, 131 skipped, 141 deselected, 13 warnings in 14.30s =========
reports/huggingface_unit_tests__run_models_gpu_models/albert/stats.txt
======== 38 passed, 101 skipped, 236 deselected, 13 warnings in 11.87s =========
### Performance Test Results for rocm6.3_testing_rel4.43_test_2
python examples/pytorch/language-modeling/run_mlm.py --model_name_or_path bert-base-uncased --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-mlm --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --max_steps 500
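Before commenting 'proceed with rebase', the main thing to check is whether any of the per-model reports above contain failures (for example, the altclip runs for upstream_sync_test_2 report 1 failed). The sketch below is one way to scan for that; it assumes the reports/ layout shown in this issue and that the pytest summary line appears in each stats.txt, and the helper name is illustrative rather than part of the CI tooling:

```python
import re
from pathlib import Path

FAILED_RE = re.compile(r"(\d+) failed")

def runs_with_failures(reports_root: str = "reports") -> list[tuple[str, int]]:
    """List every stats.txt under reports/ that reports failed tests.

    Assumes the directory layout shown above, e.g.
    reports/huggingface_unit_tests__run_models_gpu_models/altclip/stats.txt,
    with a pytest summary line somewhere in the file.
    """
    failures = []
    for stats in Path(reports_root).rglob("stats.txt"):
        match = FAILED_RE.search(stats.read_text())
        if match:
            failures.append((str(stats), int(match.group(1))))
    return failures

if __name__ == "__main__":
    for path, count in runs_with_failures():
        print(f"{count} failed test(s) in {path}")
```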
proceed with rebase
This issue contains the test results for the upstream sync, develop PR, and release testing branches. Comment 'proceed with rebase' to approve. Close this issue once maintenance is complete; leaving it open will cause problems on the next run.