-
Hi, I am running into this issue. Is there any way to run the SDXL benchmark on an Intel CPU?
```
(mlperf) ziw081@mlperf-inference-ziw081-x86-64-10725:/work$ make run_harness RUN_ARGS=…
```
-
[Single-Shot Detector (SSD)](https://towardsdatascience.com/understanding-ssd-multibox-real-time-object-detection-in-deep-learning-495ef744fab) is a popular approach for object detection. It can be pa…
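To make the multibox idea concrete, here is a minimal sketch of generating SSD-style default ("prior") boxes for one feature map. The feature-map size, scale, and aspect ratios below are illustrative placeholders, not SSD's published configuration:

```python
# Hedged sketch of SSD-style default box generation for a single feature map.
# Each grid cell gets one box per aspect ratio; real SSD repeats this over
# several feature maps at different scales.
from itertools import product

def default_boxes(fmap_size, scale, aspect_ratios):
    """Return (cx, cy, w, h) boxes, normalized to [0, 1]."""
    boxes = []
    for i, j in product(range(fmap_size), repeat=2):
        cx = (j + 0.5) / fmap_size  # box centre sits at the cell centre
        cy = (i + 0.5) / fmap_size
        for ar in aspect_ratios:
            # Width/height keep the same area (scale**2) across ratios.
            boxes.append((cx, cy, scale * ar ** 0.5, scale / ar ** 0.5))
    return boxes

boxes = default_boxes(fmap_size=4, scale=0.3, aspect_ratios=(1.0, 2.0, 0.5))
print(len(boxes))  # 4 * 4 * 3 = 48 boxes
```

At detection time, the network predicts a class score and a coordinate offset for each of these default boxes in a single forward pass, which is what makes SSD "single-shot".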
-
I am trying to produce `int4_offline` following the instructions below:
**Build loadgen**: this seems to have succeeded.
```shell
git clone --recurse-submodules https://github.com/mlperf/inference.git mlperf_inference
…
```
-
According to https://github.com/mlcommons/inference/blob/master/Submission_Guidelines.md#expected-time-to-do-benchmark-runs
There is also no constraint on the model used, except that the model must…
-
I'm trying to load a resnet50 model with quantize_int8 using calibration data, but getting the following error: `LLVM ERROR: Expected to find GEMM, convolution, or attention op, and didn't`
The erro…
-
I cloned the source code from this link: https://github.com/mlperf/inference_results_v0.5/tree/master/closed/Intel/code/ssd-small/openvino-windows
I get many LNK2019: unresolved external symbol errors on th…
-
I am trying to run "MLPerf /speech_recognition" on my CPU. But when I run the ./run.sh file from this link "https://github.com/mlcommons/inference/tree/r1.1/speech_recognition/rnnt", it creates few …
-
The [RNN-T CmdGen](https://github.com/ctuning/ck-mlperf/tree/master/cmdgen/benchmark.speech-recognition-loadgen/.cm) is a work in progress. We started it for the v0.7 submission round, but eventually di…
-
Hello,
NVIDIA MLPerf suggests using the [TensorRT](https://github.com/NVIDIA/TensorRT) framework for performant inference deployment. For DLRM (DL-based recommendation systems) inference on GPU, I h…
-
I am trying to reproduce the result on an NVIDIA Jetson Xavier NX. After setting up the environment, I got the following error message:
```
Makefile:236: *** MLPerf Inference v1.1 code requires NVIDIA Driver Ve…
```
zhr01 updated 2 years ago