mlcommons / storage

MLPerf™ Storage Benchmark Suite
https://mlcommons.org/en/groups/research-storage/
Apache License 2.0

AU doesn't meet expectation #55

Open hanyunfan opened 6 months ago

hanyunfan commented 6 months ago

I am new here; I just ran the storage benchmark for the first time and got this line:

Training_au_meet_expectation = fail.

My questions are:

  1. Will this block me from submitting in the next round?
  2. I need some help understanding AU better. To my understanding this is a calculated number, so why does it fail?
  3. How do I adjust the configuration to make it pass?

[screenshot: benchmark summary showing the failed AU check]
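(On question 2: as I understand the DLIO harness that the benchmark wraps, AU measures the fraction of wall-clock time the emulated accelerator spends computing rather than waiting on I/O, roughly

$$\mathrm{AU}\,(\%) = 100 \times \frac{n_{\text{steps}} \cdot t_{\text{compute}}}{T_{\text{epoch}}}$$

and the check fails when AU drops below the workload's threshold (90% for unet3d, if I recall the rules correctly), i.e. when the data pipeline cannot keep the accelerator busy. The exact accounting lives in the harness, so treat this as a sketch.)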

shp776 commented 6 months ago

Hi. I am also a user going through trial and error by changing parameter values. How many accelerators did you set per host? I could pass the AU check only when I set 1 accelerator per host...

hanyunfan commented 6 months ago

@shp776 thanks. I set 1 accelerator and ran on 1 node; here is my run command:

./benchmark.sh run -s localhost -w unet3d -g h100 -n 1 -r resultsdir -p dataset.num_files_train=1200 -p dataset.data_folder=unet3d_data

hanyunfan commented 6 months ago
[screenshot of benchmark output]

./benchmark.sh run -s localhost -w unet3d -g h100 -n 1 -r resultsdir -p dataset.num_files_train=1200 -p dataset.data_folder=unet3d_data -p reader.read_threads=16

hanyunfan commented 6 months ago

[METRIC] ==========================================================
[METRIC] Training Accelerator Utilization [AU] (%): 99.3492 (0.0111)
[METRIC] Training Throughput (samples/second): 20.8836 (0.1170)
[METRIC] Training I/O Throughput (MB/second): 2919.7243 (16.3633)
[METRIC] train_au_meet_expectation: success
[METRIC] ==========================================================

./benchmark.sh run -s localhost -w unet3d -g h100 -n 1 -r resultsdir -p dataset.num_files_train=1200 -p dataset.data_folder=unet3d_data -p reader.read_threads=8

hanyunfan commented 6 months ago

read_threads=6

[METRIC] ==========================================================
[METRIC] Training Accelerator Utilization [AU] (%): 99.3732 (0.0089)
[METRIC] Training Throughput (samples/second): 20.8935 (0.1140)
[METRIC] Training I/O Throughput (MB/second): 2921.1066 (15.9404)
[METRIC] train_au_meet_expectation: success
[METRIC] ==========================================================
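Since the only knob changing between these runs is reader.read_threads, a quick sweep is an easy way to find the smallest value that still passes. A minimal sketch, assuming the same benchmark.sh interface as the commands above and that the [METRIC] summary goes to stdout as shown (the per-setting results dir and log names are my own):

for t in 1 2 4 6 8 16; do
  ./benchmark.sh run -s localhost -w unet3d -g h100 -n 1 -r resultsdir-rt$t \
    -p dataset.num_files_train=1200 -p dataset.data_folder=unet3d_data \
    -p reader.read_threads=$t 2>&1 | tee run_rt$t.log   # keep a log per setting
done
# compare the AU verdict across settings
grep -H "train_au_meet_expectation" run_rt*.log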

shp776 commented 6 months ago

Hi @hanyunfan, I want to know what value you set for the parameter below in the second step (datagen).

-n, --num-parallel Number of parallel jobs used to generate the dataset

Thank you!

FileSystemGuy commented 6 months ago

Hi,

Since the benchmark score does not include the time it takes to generate the dataset files, you can set that parameter to anything you want. I like 16 or 32, for example, because it usually makes the generation phase take less time.

Thanks,
Curtis


hanyunfan commented 6 months ago

Hi @hanyunfan, I want to know what value you set for the parameter below in the second step (datagen).

-n, --num-parallel Number of parallel jobs used to generate the dataset

Thank you!

This number doesn't really matter; you can use the default. It just opens 8 or 16 parallel threads to generate the data.
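For completeness, a sketch of what that datagen step might look like with the parallelism raised. This assumes datagen takes the same -s/-w/-g/-r/-p flags as the run command above (check ./benchmark.sh datagen --help for the exact set); note that here -n maps to --num-parallel per the help text quoted above, not the accelerator count as in run:

./benchmark.sh datagen -s localhost -w unet3d -g h100 -n 16 -r resultsdir \
  -p dataset.num_files_train=1200 -p dataset.data_folder=unet3d_data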

shp776 commented 6 months ago

@FileSystemGuy, @hanyunfan Thank you very much, guys; your advice was very helpful. I have one more question.

-m, --client-host-memory-in-gb Memory available in the client where benchmark is run

I want to know whether the above parameter's value should be set as close as possible to my DRAM size in order to maximize storage performance. I'm not sure, but I think I saw this in the MLPerf Storage presentation by Balmau: if the dataset does not fit in memory (e.g. dataset = 2 × system memory), disk access occurs frequently, and training time increased by three times because of this.

From what I tested, the larger the above parameter's value, the larger the resulting dataset.num_files_train (-param dataset.num_files_train) computed in the datasize stage (step 1).

Is there anything you can tell me about this? : )

hanyunfan commented 6 months ago

@shp776 That looks like the design: you should set it equal to your test system's memory. If you set it larger, more (or larger) files will be generated to satisfy the 5x rule, so you will see more files; this is expected. Final results for anything larger than 5x the memory size should be similar, because the client cache effect has already been removed in all of those cases. So a larger value only increases your testing time, not the throughput at the end, which doesn't seem worth it.

# calculate required minimum samples given host memory to eliminate client-side caching effects
https://github.com/mlcommons/storage/blob/88e4f594a3f282be51103d6cebe6d14886b0dd6e/benchmark.sh#L217

$HOST_MEMORY_MULTIPLIER is 5 by default here, so it looks like it will generate 5x the data:
https://github.com/mlcommons/storage/blob/88e4f594a3f282be51103d6cebe6d14886b0dd6e/benchmark.sh#L51
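As a back-of-the-envelope illustration of that multiplier (not the exact code from benchmark.sh; the real calculation at the link above uses the workload's actual sample size), with hypothetical numbers:

# assumptions: 512 GB of client RAM, ~140 MB per unet3d sample
CLIENT_HOST_MEMORY_IN_GB=512
HOST_MEMORY_MULTIPLIER=5     # the default, per benchmark.sh
SAMPLE_SIZE_MB=140           # assumed average sample size, for illustration only
MIN_SAMPLES=$(( HOST_MEMORY_MULTIPLIER * CLIENT_HOST_MEMORY_IN_GB * 1024 / SAMPLE_SIZE_MB ))
echo "$MIN_SAMPLES"          # ~18724 samples needed to defeat the page cache

Doubling -m doubles this minimum, which matches what @shp776 observed: dataset.num_files_train scales with the memory you report, but past 5x the measured throughput shouldn't change.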