-
## official intro
* https://github.com/mlperf/inference_results_v0.5/tree/master/closed/NVIDIA/code/ssd-large/tensorrt
-
I am trying to run ASP's toy_problem.py, but nothing seems to change.
Is there any way to see the performance gain?
I am comparing `train_loop/arg.num_xxx_steps` for the dense and sparse runs.
It seems few…
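For context on why "nothing changes" may be expected: ASP enforces 2:4 structured sparsity (two of every four consecutive weights zeroed), and a wall-clock speedup generally only appears when sparse Tensor-Core kernels run on supported GPUs; the toy problem mostly demonstrates that the mask is applied without destroying accuracy. A minimal NumPy sketch of the 2:4 pattern, illustrative only — `prune_2_4` is a hypothetical helper written for this example, not apex's API:

```python
import numpy as np

def prune_2_4(w):
    """Zero the 2 smallest-magnitude entries in every group of 4
    consecutive weights -- the 2:4 pattern that ASP enforces."""
    out = np.array(w, copy=True)
    groups = out.reshape(-1, 4)                       # view into `out`
    smallest = np.argsort(np.abs(groups), axis=1)[:, :2]
    np.put_along_axis(groups, smallest, 0.0, axis=1)  # zero 2 of every 4
    return out

np.random.seed(0)
w = np.random.randn(8, 16)
ws = prune_2_4(w)
print((ws == 0).mean())  # → 0.5 (exactly half the weights are zeroed)
```

One quick sanity check on a real run is the weight sparsity itself (as above), rather than step timing, which will not improve without the sparse kernels.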
-
When I run the command
`cm run script --tags=generate-run-cmds,inference,_find-performance,_all-scenarios --model=bert-99 --implementation=reference --device=cuda --backend=onnxruntime --category=edg…`
-
I would like to run the `ggml/gpt-j` version on the MLPerf benchmark. Is it possible to use a fine-tuned GPT-J checkpoint listed here: https://github.com/mlcommons/inference/blob/master/language/gpt-j…
-
Please add steps describing when and how to use
https://github.com/mlperf/inference/blob/master/tools/submission/truncate_accuracy_log.py
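As a starting point, the invocation pattern from the MLPerf submission docs is roughly the following; the directory names are placeholders, so confirm the exact flags with `--help` for your checkout:

```shell
# Run from the inference repo root. The script truncates the large
# mlperf_log_accuracy.json files in a submission tree and moves the
# originals to a backup directory (placeholder paths shown).
python3 tools/submission/truncate_accuracy_log.py \
    --input ORIGINAL_SUBMISSION_DIRECTORY \
    --submitter MY_ORG \
    --backup OFFLOAD_DIRECTORY
```

It is typically run once, after all accuracy runs are complete and before packaging the submission for upload.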
-
**Command:** `cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=bert-99 --implementation=reference --backend=pytorch --device=cuda --sce…`
-
When importing [3dunet_kits19_1x1x128x128x128.tflite](https://storage.googleapis.com/iree-model-artifacts/3dunet_kits19_1x1x128x128x128.tflite) to MLIR using `iree-import-tflite`, I get the error:
…
-
I am running MLPerf Inference datacenter suite on a CPU only device following the instructions on the [documentation](https://docs.mlcommons.org/inference/benchmarks/language/llama2-70b/).
The sug…
-
I successfully installed CM following the guide at https://docs.mlcommons.org/ck/install/
and then followed https://docs.mlcommons.org/inference/benchmarks/language/bert/ to run the scripts as belo…
-
Hello,
when downloading the processed dataset for llama2-70b with rclone, as specified in "language/llama2-70b/README.md" under the "Get Dataset" section, I noticed the file "mlperf_log_accuracy…