-
When looking at Inference/Edge/Open Division results, e.g.
https://mlperf.org/inference-results-0-7/
It seems unlikely that accuracy was maintained across the different line items (e.g. going fro…
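The accuracy question raised here can be sketched as a simple threshold check. In the Closed division a result must typically reach 99% (for some benchmarks 99.9%) of the FP32 reference accuracy; the ratio and the ~76.46% ResNet-50 reference Top-1 used below are typical values and may differ per benchmark and round:

```
# Sketch of the closed-division accuracy gate (ratio and reference
# values are illustrative, not official numbers for any round).

def meets_threshold(measured: float, reference: float, ratio: float = 0.99) -> bool:
    """True if the measured accuracy is at least `ratio` of the reference."""
    return measured >= ratio * reference

# Example with a ~76.46% ResNet-50 reference Top-1:
print(meets_threshold(75.9, 76.46))  # → True  (within 99%)
print(meets_threshold(74.0, 76.46))  # → False (below 99%)
```

Open-division submissions are not held to this gate, which is why accuracy can drop across line items there.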
-
There are references to Python2 and python2.7, which run into multiple issues.
Once I resolved that, I ran into the famous issue of not being able to find the cublas_v2.h file, although the file exists in 4 different loca…
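A minimal sketch of how one might locate `cublas_v2.h` and derive an include flag for the build; the candidate directories below are common CUDA install locations, not an exhaustive or authoritative list:

```
# Locate cublas_v2.h and print a compiler include flag for each hit.
import os

CANDIDATE_DIRS = [
    "/usr/local/cuda/include",
    "/usr/local/cuda/targets/x86_64-linux/include",
    "/usr/include",
    "/opt/cuda/include",
]

def find_header(name: str = "cublas_v2.h"):
    """Return all candidate paths where the header actually exists."""
    hits = []
    for d in CANDIDATE_DIRS:
        path = os.path.join(d, name)
        if os.path.isfile(path):
            hits.append(path)
    return hits

for hit in find_header():
    # Pass this to the compiler, or add the directory to CPATH.
    print(f"-I{os.path.dirname(hit)}")
```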
-
For reproducibility, we request that NVIDIA switch the `BASE_IMAGE` repo links from internal GitLab links to publicly available repo links. In the `Makefile.docker`, the `BASE_IMAGE` URL is from an…
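A sketch of the kind of change being requested, assuming a Makefile-style `BASE_IMAGE` variable (the internal hostname is a placeholder, and the public tag is only an example from NVIDIA's NGC registry):

```
# Makefile.docker sketch (hypothetical names):
# BASE_IMAGE ?= gitlab-internal.example.com/mlperf/base:latest   # internal, not reproducible
BASE_IMAGE ?= nvcr.io/nvidia/pytorch:23.05-py3                   # publicly pullable
```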
-
In the MLPerf Inference v2.1 round, Qualcomm and their partners submitted a number of RetinaNet results, which the Review Committee eventually accepted to the Open division under the Preview category.…
-
Hi, the rules show that the min duration is 600 for all workloads (I was looking at datacenter), while it should be 60 for most of them.
https://github.com/mlcommons/inference_policies/blob/master/inferenc…
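For reference, the minimum duration is set through LoadGen's config files; a sketch of an override, assuming the standard `mlperf.conf` key names, with durations given in milliseconds:

```
# user.conf sketch -- min_duration is in milliseconds,
# so 600 s = 600000 and 60 s = 60000
*.*.min_duration = 600000
```

Note that Closed-division rules constrain which keys a `user.conf` may actually override.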
-
Hello, I hope you are doing well.
I intend to run resnet50 in the server scenario (datacenter) using the script in the docs:
```
cm run script --tags=run-mlperf,inference,_r4.1-dev \
--mode…
-
During the MLPerf Inference v1.0 round, I noticed that the power workflow when used with CPU inference _occasionally_ seemed to incur a rather high overhead (~10%), for example:
- Xavier with power m…
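One way to quantify the overhead described above is as the relative drop in a performance metric (e.g. queries/s) measured with versus without the power harness; a minimal sketch with hypothetical numbers:

```
# Relative slowdown, in percent, when power measurement is enabled.
def overhead_pct(baseline: float, with_power: float) -> float:
    return (baseline - with_power) / baseline * 100.0

# Hypothetical numbers illustrating a ~10% overhead:
print(f"{overhead_pct(1000.0, 900.0):.1f}%")  # → 10.0%
```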
-
Please see below for the detailed output. The run was done on an Nvidia RTX 4090 GPU.
```
CMD: /home/arjun/cm/bin/python3 main.py --scenario SingleStream --profile stable-diffusion-xl-pytorch --datas…
-
Add reference code for `mixtral-8x7b` (https://github.com/mlcommons/inference/tree/master/language/mixtral-8x7b) in `axs`.
The following steps are needed:
- add recipe for downloading dataset
- add recip…