mlcommons / inference_results_v1.1

This repository contains the results and code for the MLPerf™ Inference v1.1 benchmark.
https://mlcommons.org/en/inference-datacenter-11/

Apache License 2.0 · 11 stars · 23 forks
Issues
#16 · Bump onnx from 1.9.0 to 1.15.0 in /closed/FuriosaAI/code/quantization/mlperf_evaluation · dependabot[bot] · opened 7 months ago · 1 comment
#15 · Bump onnx from 1.9.0 to 1.15.0 in /closed/Alibaba/scripts · dependabot[bot] · opened 7 months ago · 1 comment
#14 · Bump onnx from 1.8.1 to 1.15.0 in /closed/FuriosaAI/code/quantization · dependabot[bot] · opened 7 months ago · 1 comment
#13 · Cannot run "make build" on Xavier AGX · JoachimMoe · opened 1 year ago · 0 comments
#12 · DLRM 99.9 performance and accuracy runs getting stuck on Xeon Ice Lake CPU · lvaidya2910 · opened 1 year ago · 0 comments
#11 · RuntimeError: module compiled against API version 0xf but this version of numpy is 0xd · yuyinsl · opened 2 years ago · 0 comments
#10 · nvidia-smi: command not found · yuyinsl · opened 2 years ago · 0 comments
#9 · Add NVIDIA third-party license header · nvzhihanj · opened 2 years ago · 2 comments
#8 · Why is the difference between the offline and single-stream performance for RNNT so big? · vid2022 · opened 2 years ago · 0 comments
#7 · "make build" on an RTX 3070 fails: "Cannot find valid configs for 1x NVIDIA GeForce RTX 3070. Please follow performance_tuning_guide.md to add support for a new GPU." · zcuuu · opened 2 years ago · 2 comments
#6 · Install TensorRT for Python 3.8 on Jetson Xavier NX · zhr01 · opened 2 years ago · 9 comments
#5 · Xavier fails to compile PyTorch 1.4 · yzh89 · opened 2 years ago · 3 comments
#4 · Removing extraneous files · georgelyuan · closed 3 years ago · 1 comment
#3 · Update A100-PCIe-80GB_aarch64x4_TRT.json · georgelyuan · closed 3 years ago · 2 comments
#2 · Fixed GCP and AWS cloud instance descriptions · gfursin · closed 3 years ago · 1 comment
#1 · Intel open-track DLRM patch · keithachorn-intel · closed 3 years ago · 1 comment