-
Intel MLPerf inference runs are failing for R50 and BERT, as shown [here](https://github.com/GATEOverflow/cm4mlops/actions/runs/11829661024/job/32961852185).
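A minimal local reproduction might look like the sketch below, assuming the standard cm4mlops entry point; the exact tags and flags the CI job uses are not shown in the report, so treat the model and device values as placeholders:
```
# Hypothetical reproduction of the failing Intel runs; the CI job's
# actual invocation may differ from these flags.
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
   --model=resnet50 --implementation=intel --device=cpu --quiet
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
   --model=bert-99 --implementation=intel --device=cpu --quiet
```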
-
# Running the mlperf-inference v3.0 DLRM multi-GPU test · wu-kan
I gave MLPerf a try and found the documentation more than a little rough; it is as if none of the vendors who submitted results ever intended for anyone else to get their code running.
[https://wu-kan.cn/2023/07/07/mlperf-inference-dlrm/](https://wu-kan.cn/2023/07/07/mlperf-inference-dlrm/)
-
I ran the cm command below several times, and it always failed at the same place:
```
(cm) tomcat@tomcat-Dove-Product:~$ cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
…
```
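The command is cut off above; a typical full invocation of this form looks like the following sketch, where the model and the remaining flags are assumptions rather than the reporter's actual values:
```
# Assumed completion of the truncated command; the reporter's actual
# model and flags are not shown in the excerpt.
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
   --model=resnet50 --implementation=reference --framework=onnxruntime \
   --device=cpu --scenario=Offline --quiet
```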
-
For reproduction:
Input model:
https://sharkpublic.blob.core.windows.net/sharkpublic/sai/sdxl-punet/punet.mlir
Input data:
wget https://sharkpublic.blob.core.windows.net/sharkpublic/sai/sdx…
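Assuming this report targets the IREE/SHARK toolchain (the `.mlir` input and the sharkpublic bucket suggest so), a minimal sketch of fetching and compiling the model might look like the following; the target backend and output name are assumptions, not values from the report:
```
# Fetch the input model referenced above.
wget https://sharkpublic.blob.core.windows.net/sharkpublic/sai/sdxl-punet/punet.mlir

# Hypothetical compile step; the backend actually used in the report
# is not shown in the excerpt.
iree-compile punet.mlir --iree-hal-target-backends=llvm-cpu -o punet.vmfb
```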
-
Trying to run offline RetinaNet in a container with one NVIDIA GPU:
```
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev --model=retinanet --implementation=nvidia …
```
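A typical complete form of this command, sketched under the assumption that the standard cm4mlops flags apply; everything after the truncation is an assumption:
```
# Assumed completion; the reporter's trailing flags are truncated above.
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
   --model=retinanet --implementation=nvidia --framework=tensorrt \
   --device=cuda --scenario=Offline --docker --quiet
```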
-
Some edge systems may not be connected to the internet, and we need a way to run MLPerf inference benchmarks on them using CM.
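One possible direction, sketched below on the assumption that CM keeps its repos and cache under `$HOME/CM`: populate the cache on a connected host, copy it over, and let the edge run resolve dependencies locally. Whether every script then works fully offline is exactly what this issue would need to verify.
```
# On a connected host, run once so CM downloads code, models and
# datasets into its cache (default location assumed to be $HOME/CM).
cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1-dev \
   --model=resnet50 --implementation=reference --device=cpu --quiet

# Copy the populated cache to the offline edge system.
rsync -a $HOME/CM/ edge-host:~/CM/

# On the edge system, rerun the same command; dependencies should now
# resolve from the local cache instead of the network.
```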
-
It would be good to fix the compilation warnings emitted when building loadgen.
```
-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI…
```
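One way to surface the full set of warnings locally, assuming loadgen's standard CMake build; the flags below are a suggestion, not the project's current configuration:
```
# Build loadgen with extra warnings enabled; -Werror could be added
# once the existing warnings are cleaned up.
cmake -S loadgen -B build -DCMAKE_CXX_FLAGS="-Wall -Wextra"
cmake --build build
```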
-
We need to update the MLPerf inference docs for native CUDA runs:
1. Add a remark that unless CUDA, cuDNN and TensorRT are available in the environment, it is recommended to use the docker option (see the sketch after this list).
2. I…
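For the remark in item 1, a quick preflight check of the native toolchain might look like this sketch; the docs' final wording may of course differ:
```
# Native CUDA runs need the toolkit, cuDNN and TensorRT on the host;
# if any of these checks fail, the docker option is the safer route.
nvcc --version                                 # CUDA toolkit
nvidia-smi                                     # driver / GPU visibility
ldconfig -p | grep -E 'libcudnn|libnvinfer'    # cuDNN and TensorRT
```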
-
There should be a defined process for syncing sub-branches in this repo, to avoid merge conflicts for people working on different projects.
We have the following hierarchy of branches in this repo:…
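Since the branch hierarchy is truncated above, the branch names below are hypothetical; a common convention is to merge the parent branch into each sub-branch regularly and only merge upward through pull requests:
```
# Hypothetical branch names; substitute the repo's actual hierarchy.
# Folding the parent into the sub-branch early keeps conflicts small.
git checkout mlperf-inference-dev
git fetch origin
git merge origin/dev            # parent -> sub-branch; resolve conflicts here
git push origin mlperf-inference-dev
# Sub-branch -> parent should go through a pull request, not a direct merge.
```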