mlcommons / inference

Reference implementations of MLPerf™ inference benchmarks
https://mlcommons.org/en/groups/inference
Apache License 2.0

CM error: no scripts were found with above tags and variations #1709

Open sunpian1 opened 3 months ago

sunpian1 commented 3 months ago

(mlperf) susie.sun@yizhu-R5300-G5:~$ cmr "run mlperf inference generate-run-cmds _submission" --quiet --submitter="MLCommons" --hw_name=default --model=resnet50 --implementation=reference --backend=tf --device=gpu --scenario=Offline --adr.compiler.tags=gcc --target_qps=1 --category=edge --division=open

Path to Python: /home/susie.sun/anaconda3/envs/mlperf/bin/python3
Python version: 3.10.0

Path to the MLPerf inference benchmark configuration file: /home/susie.sun/CM/repos/local/cache/2c8c91d452654dd5/inference/mlperf.conf
Path to MLPerf inference benchmark sources: /home/susie.sun/CM/repos/local/cache/2c8c91d452654dd5/inference

     ! call "postprocess" from /home/susie.sun/CM/repos/mlcommons@cm4mlops/script/get-mlperf-inference-utils/customize.py

Using MLCommons Inference source from /home/susie.sun/CM/repos/local/cache/2c8c91d452654dd5/inference

Running loadgen scenario: Offline and mode: performance

CM error: no scripts were found with above tags and variations

variation tags ['reference', 'resnet50', 'tf', 'gpu', 'test', 'offline'] are not matching for the found script app-mlperf-inference with variations dict_keys(['cpp', 'mil', 'mlcommons-cpp', 'ctuning-cpp-tflite', 'tflite-cpp', 'reference', 'python', 'nvidia', 'mlcommons-python', 'reference,gptj', 'reference,sdxl', 'reference,dlrm-v2', 'reference,llama2-70b', 'reference,resnet50', 'reference,retinanet', 'reference,bert', 'nvidia-original', 'intel', 'intel-original', 'intel-original,gptj', 'intel-original,gptj,build-harness', 'qualcomm', 'kilt', 'kilt,qaic,resnet50', 'kilt,qaic,retinanet', 'kilt,qaic,bert-99', 'kilt,qaic,bert-99.9', 'intel-original,resnet50', 'intel-original,retinanet', 'intel-original,bert-99', 'intel-original,bert-99.9', 'intel-original,gptj-99', 'intel-original,gptj-99.9', 'resnet50', 'retinanet', '3d-unet-99', '3d-unet-99.9', '3d-unet', 'sdxl', 'llama2-70b', 'llama2-70b-99', 'llama2-70b-99.9', 'rnnt', 'rnnt,reference', 'gptj-99', 'gptj-99.9', 'gptj', 'gptj', 'bert', 'bert-99', 'bert-99.9', 'dlrm_', 'dlrm-v2-99', 'dlrm-v2-99.9', 'mobilenet', 'efficientnet', 'onnxruntime', 'tensorrt', 'tf', 'pytorch', 'ncnn', 'deepsparse', 'tflite', 'glow', 'tvm-onnx', 'tvm-pytorch', 'tvm-tflite', 'ray', 'cpu', 'cuda', 'rocm', 'qaic', 'tpu', 'fast', 'test', 'valid,retinanet', 'valid', 'quantized', 'fp32', 'float32', 'float16', 'bfloat16', 'int4', 'int8', 'uint8', 'offline', 'multistream', 'singlestream', 'server', 'power', 'batch_size.#', 'r2.1_default', 'r3.0_default', 'r3.1_default', 'r4.0_default']) !
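For readers hitting the same error: the failure is mechanical. CM only selects a script when every requested variation tag appears among the script's declared variations, and 'gpu' is not in app-mlperf-inference's list (the device variations are 'cpu', 'cuda', 'rocm', 'qaic', 'tpu'). A minimal sketch of that kind of matching check (hypothetical helper, not CM's actual implementation):

```python
# Hypothetical sketch of variation-tag matching: every requested tag
# must appear among the script's declared variations. This is NOT CM's
# real code, just an illustration of why 'gpu' fails to match.
def find_unsupported(requested, declared):
    # Compound variations like 'reference,resnet50' declare each part.
    flat = {part for v in declared for part in v.split(",")}
    return [t for t in requested if t not in flat]

requested = ["reference", "resnet50", "tf", "gpu", "test", "offline"]
# Excerpt of the declared variations from the error message above:
declared = ["reference", "resnet50", "tf", "test", "offline",
            "cpu", "cuda", "rocm", "qaic", "tpu"]

print(find_unsupported(requested, declared))  # → ['gpu']
```

With `--device=cuda` the requested tag list becomes a subset of the declared variations and the script is found.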

arjunsuresh commented 3 months ago

It should be --device=cuda or --device=cpu, not --device=gpu. Also, please follow the new docs site for updated CM commands.
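With that one flag changed, the original command from the report would read as follows (all other flags and values kept as posted; a sketch of the fix, not a separately verified run):

```shell
cmr "run mlperf inference generate-run-cmds _submission" --quiet \
    --submitter="MLCommons" --hw_name=default --model=resnet50 \
    --implementation=reference --backend=tf --device=cuda \
    --scenario=Offline --adr.compiler.tags=gcc --target_qps=1 \
    --category=edge --division=open
```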

sunpian1 commented 3 months ago

ok