mlcommons / inference_results_v3.1

This repository contains the results and code for the MLPerf™ Inference v3.1 benchmark.
https://mlcommons.org/benchmarks/inference-datacenter/
Apache License 2.0

File missing in closed/NVIDIA #7

Closed alice890308 closed 10 months ago

alice890308 commented 1 year ago

Hi! I’m trying to run MLPerf on my A100 GPU with MIG mode enabled. I’m using NVIDIA’s implementation and following the documentation, but I have run into some questions:

  1. I would like to run these benchmarks on MIG slices as described on this page, but I can’t find scripts/launch_heterogeneous_mig.py in the repo.

  2. This implementation uses a benchmark configuration file for each benchmark, for example, the Server scenario of resnet50. Is there a document that describes each field in more detail? It would help me understand the meaning of every field and tune them for my experiments.
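For context, the per-benchmark configurations in NVIDIA's submission code are Python files keyed by system and scenario. Below is a minimal, hypothetical sketch of the kind of fields such a Server-scenario config tends to contain; the class name, field names, and values here are illustrative assumptions for discussion, not copied from the repo:

```python
from dataclasses import dataclass


@dataclass
class ServerBenchmarkConfig:
    """Hypothetical sketch of a Server-scenario benchmark config.

    Field names and values are illustrative only; consult the actual
    config files in the submission code for the authoritative fields.
    """
    gpu_batch_size: int = 64          # how many samples are batched per GPU inference
    server_target_qps: float = 1000.0 # queries/sec LoadGen issues; raise until latency SLA fails
    gpu_copy_streams: int = 1         # CUDA streams for host-to-device input copies
    gpu_inference_streams: int = 2    # CUDA streams running inference kernels concurrently
    use_graphs: bool = False          # capture inference as CUDA graphs to cut launch overhead


# Example: a lower-QPS variant for a smaller (e.g. MIG-sliced) system
mig_config = ServerBenchmarkConfig(gpu_batch_size=16, server_target_qps=250.0)
print(mig_config.server_target_qps)
```

In practice, tuning usually means raising `server_target_qps` until the Server scenario's latency constraint is violated, then backing off, while `gpu_batch_size` trades throughput against per-query latency.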

nv-ananjappa commented 11 months ago

@alice890308 NVIDIA's v3.1 code does not officially support running on A100 MIG. You can try the code from some of our older submissions that did support MIG.