I am running the MLPerf Inference datacenter suite on a CPU-only device, following the instructions in the documentation.
The suggested sample size/query counts seem to take a very long time to reach completion.
Would it be possible to query intermediate results (such as throughput) while the benchmark is executing?
How are the sample sizes correlated with the accuracy of the results? For instance, does a llama2 run on CPU need the same sample count (24576) as on GPU? This is suggested here.
I see the following prints on my terminal, but I am not sure how to interpret these results:
You can run with --execution_mode=test --test_query_count=100 and get a quick result, but this won't be accepted as an official one.
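For context, here is a minimal sketch of how those flags could be passed through the CM automation used in the MLPerf inference docs; the tags, model, and implementation values below are illustrative placeholders, so substitute whatever you are already running:

```bash
# Quick smoke test: run in test mode with a reduced query count.
# The --tags/--model/--implementation values are illustrative;
# keep the ones from your existing run command.
cm run script --tags=run-mlperf,inference \
   --model=llama2-70b-99 \
   --implementation=reference \
   --device=cpu \
   --execution_mode=test \
   --test_query_count=100
```

This finishes quickly and prints a throughput estimate, but the result is not valid for an official submission.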
Yes, a minimum of 24576 inputs needs to be run for llama2. The accuracy value also changes if we run a lower number of inputs. For this reason, no one has attempted a llama2-70b submission on CPUs.