mlcommons / inference

Reference implementations of MLPerf™ inference benchmarks
https://mlcommons.org/en/groups/inference
Apache License 2.0

ResNet50 inference command error #1819

Open xeasonx opened 1 month ago

xeasonx commented 1 month ago

I followed the documentation to run ResNet50 inference, choosing MLCommons-Python -> edge -> Tensorflow -> CUDA -> Native. The command is:

cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1 \
   --model=resnet50 \
   --implementation=reference \
   --framework=tensorflow \
   --category=edge \
   --scenario=Offline \
   --execution_mode=test \
   --device=cuda  \
   --quiet \
   --test_query_count=1000

It fails with "CM error: no scripts were found with above tags and variations". What's wrong? The complete output is:

INFO:root:* cm run script "run-mlperf inference _find-performance _full _r4.1"
INFO:root:  * cm run script "detect os"
INFO:root:         ! cd /home/eason/ml_workspace
INFO:root:         ! call /home/eason/CM/repos/mlcommons@cm4mlops/script/detect-os/run.sh from tmp-run.sh
INFO:root:         ! call "postprocess" from /home/eason/CM/repos/mlcommons@cm4mlops/script/detect-os/customize.py
INFO:root:  * cm run script "detect cpu"
INFO:root:    * cm run script "detect os"
INFO:root:           ! cd /home/eason/ml_workspace
INFO:root:           ! call /home/eason/CM/repos/mlcommons@cm4mlops/script/detect-os/run.sh from tmp-run.sh
INFO:root:           ! call "postprocess" from /home/eason/CM/repos/mlcommons@cm4mlops/script/detect-os/customize.py
INFO:root:         ! cd /home/eason/ml_workspace
INFO:root:         ! call /home/eason/CM/repos/mlcommons@cm4mlops/script/detect-cpu/run.sh from tmp-run.sh
INFO:root:         ! call "postprocess" from /home/eason/CM/repos/mlcommons@cm4mlops/script/detect-cpu/customize.py
INFO:root:  * cm run script "get python3"
INFO:root:       ! load /home/eason/CM/repos/local/cache/a41bcff19d784ed7/cm-cached-state.json
INFO:root:Path to Python: /home/eason/ml_workspace/py_venv/cm/bin/python3
INFO:root:Python version: 3.10.12
INFO:root:  * cm run script "get mlcommons inference src"
INFO:root:       ! load /home/eason/CM/repos/local/cache/c73bd23e67734e1b/cm-cached-state.json
INFO:root:  * cm run script "get sut description"
INFO:root:    * cm run script "detect os"
INFO:root:           ! cd /home/eason/ml_workspace
INFO:root:           ! call /home/eason/CM/repos/mlcommons@cm4mlops/script/detect-os/run.sh from tmp-run.sh
INFO:root:           ! call "postprocess" from /home/eason/CM/repos/mlcommons@cm4mlops/script/detect-os/customize.py
INFO:root:    * cm run script "detect cpu"
INFO:root:      * cm run script "detect os"
INFO:root:             ! cd /home/eason/ml_workspace
INFO:root:             ! call /home/eason/CM/repos/mlcommons@cm4mlops/script/detect-os/run.sh from tmp-run.sh
INFO:root:             ! call "postprocess" from /home/eason/CM/repos/mlcommons@cm4mlops/script/detect-os/customize.py
INFO:root:           ! cd /home/eason/ml_workspace
INFO:root:           ! call /home/eason/CM/repos/mlcommons@cm4mlops/script/detect-cpu/run.sh from tmp-run.sh
INFO:root:           ! call "postprocess" from /home/eason/CM/repos/mlcommons@cm4mlops/script/detect-cpu/customize.py
INFO:root:    * cm run script "get python3"
INFO:root:         ! load /home/eason/CM/repos/local/cache/a41bcff19d784ed7/cm-cached-state.json
INFO:root:Path to Python: /home/eason/ml_workspace/py_venv/cm/bin/python3
INFO:root:Python version: 3.10.12
INFO:root:    * cm run script "get compiler"
INFO:root:         ! load /home/eason/CM/repos/local/cache/0e347360381e4212/cm-cached-state.json
INFO:root:    * cm run script "get cuda-devices"
INFO:root:      * cm run script "get cuda _toolkit"
INFO:root:           ! load /home/eason/CM/repos/local/cache/00ba83f0c4944122/cm-cached-state.json
INFO:root:ENV[CM_CUDA_PATH_LIB_CUDNN_EXISTS]: no
INFO:root:ENV[CM_CUDA_VERSION]: 12.5
INFO:root:ENV[CM_CUDA_VERSION_STRING]: cu125
INFO:root:ENV[CM_NVCC_BIN_WITH_PATH]: /usr/local/cuda/bin/nvcc
INFO:root:ENV[CUDA_HOME]: /usr/local/cuda
INFO:root:           ! cd /home/eason/ml_workspace
INFO:root:           ! call /home/eason/CM/repos/mlcommons@cm4mlops/script/get-cuda-devices/run.sh from tmp-run.sh
rm: cannot remove 'a.out': No such file or directory

Checking compiler version ...

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Jun__6_02:18:23_PDT_2024
Cuda compilation tools, release 12.5, V12.5.82
Build cuda_12.5.r12.5/compiler.34385749_0

Compiling program ...

Running program ...

/home/eason/ml_workspace
INFO:root:           ! call "postprocess" from /home/eason/CM/repos/mlcommons@cm4mlops/script/get-cuda-devices/customize.py
GPU Device ID: 0
GPU Name: NVIDIA GeForce GTX 1070 Ti
GPU compute capability: 6.1
CUDA driver version: 12.2
CUDA runtime version: 12.5
Global memory: 8504934400
Max clock rate: 1683.000000 MHz
Total amount of shared memory per block: 49152
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor:  2048
Maximum number of threads per block: 1024
Max dimension size of a thread block X: 1024
Max dimension size of a thread block Y: 1024
Max dimension size of a thread block Z: 64
Max dimension size of a grid size X: 2147483647
Max dimension size of a grid size Y: 65535
Max dimension size of a grid size Z: 65535

INFO:root:    * cm run script "get generic-python-lib _package.dmiparser"
INFO:root:         ! load /home/eason/CM/repos/local/cache/5fc4395af0ad4957/cm-cached-state.json
INFO:root:    * cm run script "get cache dir _name.mlperf-inference-sut-descriptions"
INFO:root:         ! load /home/eason/CM/repos/local/cache/071b923c360f4f19/cm-cached-state.json
Generating SUT description file for eason_Precision_Tower_5810-tensorflow
INFO:root:         ! call "postprocess" from /home/eason/CM/repos/mlcommons@cm4mlops/script/get-mlperf-inference-sut-description/customize.py
INFO:root:  * cm run script "get mlperf inference results dir"
INFO:root:       ! load /home/eason/CM/repos/local/cache/3c30031394d145c4/cm-cached-state.json
INFO:root:  * cm run script "install pip-package for-cmind-python _package.tabulate"
INFO:root:       ! load /home/eason/CM/repos/local/cache/51d85c7579234247/cm-cached-state.json
INFO:root:  * cm run script "get mlperf inference utils"
INFO:root:    * cm run script "get mlperf inference src"
INFO:root:         ! load /home/eason/CM/repos/local/cache/c73bd23e67734e1b/cm-cached-state.json
INFO:root:         ! call "postprocess" from /home/eason/CM/repos/mlcommons@cm4mlops/script/get-mlperf-inference-utils/customize.py
Using MLCommons Inference source from /home/eason/CM/repos/local/cache/cade730fd2ef4531/inference

Running loadgen scenario: Offline and mode: performance
INFO:root:* cm run script "app mlperf inference generic _reference _resnet50 _tensorflow _cuda _test _r4.1_default _offline"

CM error: no scripts were found with above tags and variations

variation tags ['reference', 'resnet50', 'tensorflow', 'cuda', 'test', 'r4.1_default', 'offline'] are not matching for the found script app-mlperf-inference with variations dict_keys(['cpp', 'mil', 'mlcommons-cpp', 'ctuning-cpp-tflite', 'tflite-cpp', 'reference', 'python', 'nvidia', 'mlcommons-python', 'reference,gptj_', 'reference,sdxl_', 'reference,dlrm-v2_', 'reference,llama2-70b_', 'reference,mixtral-8x7b', 'reference,resnet50', 'reference,retinanet', 'reference,bert_', 'nvidia-original,r4.1-dev_default', 'nvidia-original,r4.1-dev_default,gptj_', 'nvidia-original,r4.1_default', 'nvidia-original,r4.1_default,gptj_', 'nvidia-original,r4.1-dev_default,llama2-70b_', 'nvidia-original,r4.1_default,llama2-70b_', 'nvidia-original', 'intel', 'intel-original', 'intel-original,gptj_', 'redhat', 'qualcomm', 'kilt', 'kilt,qaic,resnet50', 'kilt,qaic,retinanet', 'kilt,qaic,bert-99', 'kilt,qaic,bert-99.9', 'intel-original,resnet50', 'intel-original,retinanet', 'intel-original,bert-99', 'intel-original,bert-99.9', 'intel-original,gptj-99', 'intel-original,gptj-99.9', 'resnet50', 'retinanet', '3d-unet-99', '3d-unet-99.9', '3d-unet_', 'sdxl', 'llama2-70b_', 'llama2-70b-99', 'llama2-70b-99.9', 'mixtral-8x7b', 'rnnt', 'rnnt,reference', 'gptj-99', 'gptj-99.9', 'gptj', 'gptj_', 'bert_', 'bert-99', 'bert-99.9', 'dlrm_', 'dlrm-v2-99', 'dlrm-v2-99.9', 'dlrm_,nvidia', 'mobilenet', 'efficientnet', 'onnxruntime', 'tensorrt', 'tf', 'pytorch', 'openshift', 'ncnn', 'deepsparse', 'tflite', 'glow', 'tvm-onnx', 'tvm-pytorch', 'tvm-tflite', 'ray', 'cpu', 'cuda,reference', 'cuda', 'rocm', 'qaic', 'tpu', 'fast', 'test', 'valid,retinanet', 'valid', 'quantized', 'fp32', 'float32', 'float16', 'bfloat16', 'int4', 'int8', 'uint8', 'offline', 'multistream', 'singlestream', 'server', 'power', 'batch_size.#', 'r2.1_default', 'r3.0_default', 'r3.1_default', 'r4.0-dev_default', 'r4.0_default', 'r4.1-dev_default', 'r4.1_default'])
arjunsuresh commented 1 month ago

Can you please do cm pull repo and retry the command?
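
For copy-paste, the suggested command is:

cm pull repo

This runs git pull on the local mlcommons@cm4mlops checkout and then reindexes all CM artifacts, as the output in the next comment shows.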

howudodat commented 1 month ago

Same issue here.

adm@ml:~$ cm pull repo
=======================================================
Alias:    mlcommons@cm4mlops

Local path: /home/adm/CM/repos/mlcommons@cm4mlops

git pull

remote: Enumerating objects: 181, done.
remote: Counting objects: 100% (166/166), done.
remote: Compressing objects: 100% (58/58), done.
remote: Total 181 (delta 112), reused 161 (delta 108), pack-reused 15
Receiving objects: 100% (181/181), 68.93 KiB | 1.77 MiB/s, done.
Resolving deltas: 100% (114/114), completed with 16 local objects.
From https://github.com/mlcommons/cm4mlops
 + 35d28c139...bb842e795 gh-pages         -> origin/gh-pages  (forced update)
   e4e422d1f..1fa585a91  mlperf-inference -> origin/mlperf-inference
Already up to date.

CM alias for this repository: mlcommons@cm4mlops
=======================================================

Reindexing all CM artifacts. Can take some time ...
Took 1.8 sec.
adm@ml:~$ cm run script --tags=run-mlperf,inference,_find-performance,_full,_r4.1    --model=resnet50    --implementation=reference    --framework=tensorflow    --category=edge    --scenario=Offline    --execution_mode=test    --device=cuda     --quiet    --test_query_count=1000
INFO:root:* cm run script "run-mlperf inference _find-performance _full _r4.1"
INFO:root:  * cm run script "detect os"
INFO:root:         ! cd /home/adm
INFO:root:         ! call /home/adm/CM/repos/mlcommons@cm4mlops/script/detect-os/run.sh from tmp-run.sh
INFO:root:         ! call "postprocess" from /home/adm/CM/repos/mlcommons@cm4mlops/script/detect-os/customize.py
INFO:root:  * cm run script "detect cpu"
INFO:root:    * cm run script "detect os"
INFO:root:           ! cd /home/adm
INFO:root:           ! call /home/adm/CM/repos/mlcommons@cm4mlops/script/detect-os/run.sh from tmp-run.sh
INFO:root:           ! call "postprocess" from /home/adm/CM/repos/mlcommons@cm4mlops/script/detect-os/customize.py
INFO:root:         ! cd /home/adm
INFO:root:         ! call /home/adm/CM/repos/mlcommons@cm4mlops/script/detect-cpu/run.sh from tmp-run.sh
INFO:root:         ! call "postprocess" from /home/adm/CM/repos/mlcommons@cm4mlops/script/detect-cpu/customize.py
INFO:root:  * cm run script "get python3"
INFO:root:       ! load /home/adm/CM/repos/local/cache/30dedbee20cd4879/cm-cached-state.json
INFO:root:Path to Python: /usr/bin/python3
INFO:root:Python version: 3.10.12
INFO:root:  * cm run script "get mlcommons inference src"
INFO:root:       ! load /home/adm/CM/repos/local/cache/966516d5f44c4a7d/cm-cached-state.json
INFO:root:  * cm run script "get sut description"
INFO:root:    * cm run script "detect os"
INFO:root:           ! cd /home/adm
INFO:root:           ! call /home/adm/CM/repos/mlcommons@cm4mlops/script/detect-os/run.sh from tmp-run.sh
INFO:root:           ! call "postprocess" from /home/adm/CM/repos/mlcommons@cm4mlops/script/detect-os/customize.py
INFO:root:    * cm run script "detect cpu"
INFO:root:      * cm run script "detect os"
INFO:root:             ! cd /home/adm
INFO:root:             ! call /home/adm/CM/repos/mlcommons@cm4mlops/script/detect-os/run.sh from tmp-run.sh
INFO:root:             ! call "postprocess" from /home/adm/CM/repos/mlcommons@cm4mlops/script/detect-os/customize.py
INFO:root:           ! cd /home/adm
INFO:root:           ! call /home/adm/CM/repos/mlcommons@cm4mlops/script/detect-cpu/run.sh from tmp-run.sh
INFO:root:           ! call "postprocess" from /home/adm/CM/repos/mlcommons@cm4mlops/script/detect-cpu/customize.py
INFO:root:    * cm run script "get python3"
INFO:root:         ! load /home/adm/CM/repos/local/cache/30dedbee20cd4879/cm-cached-state.json
INFO:root:Path to Python: /usr/bin/python3
INFO:root:Python version: 3.10.12
INFO:root:    * cm run script "get compiler"
INFO:root:         ! load /home/adm/CM/repos/local/cache/954faef8afdb4746/cm-cached-state.json
INFO:root:    * cm run script "get cuda-devices"
INFO:root:      * cm run script "get cuda _toolkit"
INFO:root:           ! load /home/adm/CM/repos/local/cache/f6eaeed72d1a48c6/cm-cached-state.json
INFO:root:ENV[CM_CUDA_PATH_LIB_CUDNN_EXISTS]: no
INFO:root:ENV[CM_CUDA_VERSION]: 11.5
INFO:root:ENV[CM_CUDA_VERSION_STRING]: cu115
INFO:root:ENV[CM_NVCC_BIN_WITH_PATH]: /usr/bin/nvcc
INFO:root:ENV[CUDA_HOME]: /usr
INFO:root:           ! cd /home/adm
INFO:root:           ! call /home/adm/CM/repos/mlcommons@cm4mlops/script/get-cuda-devices/run.sh from tmp-run.sh
rm: cannot remove 'a.out': No such file or directory

Checking compiler version ...

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Nov_18_09:45:30_PST_2021
Cuda compilation tools, release 11.5, V11.5.119
Build cuda_11.5.r11.5/compiler.30672275_0

Compiling program ...

Running program ...

/home/adm
INFO:root:           ! call "postprocess" from /home/adm/CM/repos/mlcommons@cm4mlops/script/get-cuda-devices/customize.py
GPU Device ID: 0
GPU Name: NVIDIA RTX A2000 Embedded GPU
GPU compute capability: 8.6
CUDA driver version: 12.4
CUDA runtime version: 11.5
Global memory: 8353677312
Max clock rate: 1815.000000 MHz
Total amount of shared memory per block: 49152
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor:  1536
Maximum number of threads per block: 1024
Max dimension size of a thread block X: 1024
Max dimension size of a thread block Y: 1024
Max dimension size of a thread block Z: 64
Max dimension size of a grid size X: 2147483647
Max dimension size of a grid size Y: 65535
Max dimension size of a grid size Z: 65535

INFO:root:    * cm run script "get generic-python-lib _package.dmiparser"
INFO:root:         ! load /home/adm/CM/repos/local/cache/857273592d3a4b9b/cm-cached-state.json
INFO:root:    * cm run script "get cache dir _name.mlperf-inference-sut-descriptions"
INFO:root:         ! load /home/adm/CM/repos/local/cache/7822589759eb4a3f/cm-cached-state.json
Generating SUT description file for ml-tensorflow
INFO:root:         ! call "postprocess" from /home/adm/CM/repos/mlcommons@cm4mlops/script/get-mlperf-inference-sut-description/customize.py
INFO:root:  * cm run script "get mlperf inference results dir"
INFO:root:       ! load /home/adm/CM/repos/local/cache/42b132ad0f334c2f/cm-cached-state.json
INFO:root:  * cm run script "install pip-package for-cmind-python _package.tabulate"
INFO:root:       ! load /home/adm/CM/repos/local/cache/e8ad5f78f4cc40c7/cm-cached-state.json
INFO:root:  * cm run script "get mlperf inference utils"
INFO:root:    * cm run script "get mlperf inference src"
INFO:root:         ! load /home/adm/CM/repos/local/cache/966516d5f44c4a7d/cm-cached-state.json
INFO:root:         ! call "postprocess" from /home/adm/CM/repos/mlcommons@cm4mlops/script/get-mlperf-inference-utils/customize.py
Using MLCommons Inference source from /home/adm/CM/repos/local/cache/54fa44c3e04540b6/inference

Running loadgen scenario: Offline and mode: performance
INFO:root:* cm run script "app mlperf inference generic _reference _resnet50 _tensorflow _cuda _test _r4.1_default _offline"

CM error: no scripts were found with above tags and variations

variation tags ['reference', 'resnet50', 'tensorflow', 'cuda', 'test', 'r4.1_default', 'offline'] are not matching for the found script app-mlperf-inference with variations dict_keys(['cpp', 'mil', 'mlcommons-cpp', 'ctuning-cpp-tflite', 'tflite-cpp', 'reference', 'python', 'nvidia', 'mlcommons-python', 'reference,gptj_', 'reference,sdxl_', 'reference,dlrm-v2_', 'reference,llama2-70b_', 'reference,mixtral-8x7b', 'reference,resnet50', 'reference,retinanet', 'reference,bert_', 'nvidia-original,r4.1-dev_default', 'nvidia-original,r4.1-dev_default,gptj_', 'nvidia-original,r4.1_default', 'nvidia-original,r4.1_default,gptj_', 'nvidia-original,r4.1-dev_default,llama2-70b_', 'nvidia-original,r4.1_default,llama2-70b_', 'nvidia-original', 'intel', 'intel-original', 'intel-original,gptj_', 'redhat', 'qualcomm', 'kilt', 'kilt,qaic,resnet50', 'kilt,qaic,retinanet', 'kilt,qaic,bert-99', 'kilt,qaic,bert-99.9', 'intel-original,resnet50', 'intel-original,retinanet', 'intel-original,bert-99', 'intel-original,bert-99.9', 'intel-original,gptj-99', 'intel-original,gptj-99.9', 'resnet50', 'retinanet', '3d-unet-99', '3d-unet-99.9', '3d-unet_', 'sdxl', 'llama2-70b_', 'llama2-70b-99', 'llama2-70b-99.9', 'mixtral-8x7b', 'rnnt', 'rnnt,reference', 'gptj-99', 'gptj-99.9', 'gptj', 'gptj_', 'bert_', 'bert-99', 'bert-99.9', 'dlrm_', 'dlrm-v2-99', 'dlrm-v2-99.9', 'dlrm_,nvidia', 'mobilenet', 'efficientnet', 'onnxruntime', 'tensorrt', 'tf', 'pytorch', 'openshift', 'ncnn', 'deepsparse', 'tflite', 'glow', 'tvm-onnx', 'tvm-pytorch', 'tvm-tflite', 'ray', 'cpu', 'cuda,reference', 'cuda', 'rocm', 'qaic', 'tpu', 'fast', 'test', 'valid,retinanet', 'valid', 'quantized', 'fp32', 'float32', 'float16', 'bfloat16', 'int4', 'int8', 'uint8', 'offline', 'multistream', 'singlestream', 'server', 'power', 'batch_size.#', 'r2.1_default', 'r3.0_default', 'r3.1_default', 'r4.0-dev_default', 'r4.0_default', 'r4.1-dev_default', 'r4.1_default'])
arjunsuresh commented 1 month ago

Looks like you are not on the mlperf-inference branch. Did you install via pip install cm4mlops? Can you do cd $HOME/CM/repos/mlcommons@cm4mlops && git checkout mlperf-inference && git pull && cd -
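
Step by step, assuming the default CM repo location under $HOME/CM, that one-liner is:

cd $HOME/CM/repos/mlcommons@cm4mlops
git checkout mlperf-inference
git pull
cd -

Afterwards, git branch --show-current should print mlperf-inference, confirming you are on the right branch before retrying the run command.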

xeasonx commented 1 month ago

@arjunsuresh Problem solved, thank you.