mlcommons / inference

Reference implementations of MLPerf™ inference benchmarks
https://mlcommons.org/en/groups/inference
Apache License 2.0

Querying intermediate results #1839

Open rajesh-s opened 3 months ago

rajesh-s commented 3 months ago

I am running the MLPerf Inference datacenter suite on a CPU-only device, following the instructions in the documentation.

Runs with the suggested sample sizes/query counts take a very long time to complete.

  1. Would it be possible to query intermediate results (such as throughput) while the benchmark is executing?
  2. How do the sample sizes affect the accuracy of the results? For instance, does a llama2 run on CPU need the same sample count (24576) as one on GPU? This is suggested here.

I see the following output in my terminal, but I am not sure how to interpret it:

[screenshot of terminal output]
arjunsuresh commented 3 months ago
  1. You can pass --execution_mode=test --test_query_count=100 and get a quick result, but this won't be accepted as an official one (see the sketch after this list).
  2. Yes, a minimum of 24576 inputs needs to be run for llama2. The accuracy value also changes if fewer inputs are run; for this reason, no one has tried a llama2-70b submission on CPUs.
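
For reference, a minimal sketch of such a quick test run, assuming the CM automation workflow described in the MLPerf inference documentation; the model, framework, and category tags as well as the log path below are illustrative assumptions and will differ per setup:

```bash
# Quick sanity run in test mode (results are NOT valid for submission).
# Assumes the CM automation from the MLPerf inference docs is installed;
# the model/framework/category/scenario tags are illustrative and may
# need adjusting for your setup.
cm run script --tags=run-mlperf,inference \
    --model=llama2-70b-99 \
    --implementation=reference \
    --framework=pytorch \
    --category=datacenter \
    --scenario=Offline \
    --device=cpu \
    --execution_mode=test \
    --test_query_count=100 \
    --quiet

# While a run is in progress, LoadGen's detail log can be tailed for
# intermediate progress messages; <output_dir> is a hypothetical
# placeholder and depends on where your harness writes its output.
tail -f <output_dir>/mlperf_log_detail.txt
```

Test mode only shortens the run; for watching progress during a full-length run, following the LoadGen detail log as above is one option.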