chuanqi129 / inductor-tools

Tools for TorchInductor development and benchmarking

Guilty commit cmd line alignment with runner.py #128

Open WeizhuoZhang-intel opened 2 months ago

WeizhuoZhang-intel commented 2 months ago

Hi @chuanqi129,

Currently, the guilty-commit single-run script supports no backend other than inductor. See https://github.com/chuanqi129/inductor-tools/blob/main/scripts/modelbench/inductor_single_run.sh#L53-L58:

Flag_extra=""
if [[ ${FREEZE} == "on" ]]; then
    export TORCHINDUCTOR_FREEZING=1
    echo "Testing with freezing on."
    Flag_extra="--freezing " 
fi 

Can we read the command-line parameters from TABLE in runner.py instead? That way we wouldn't have to update the options by hand every time a new backend is enabled. https://github.com/pytorch/pytorch/blob/main/benchmarks/dynamo/runner.py#L64-L96

TABLE = {
    "training": {
        "ts_nnc": "--training --speedup-ts ",
        "ts_nvfuser": "--training --nvfuser --speedup-dynamo-ts ",
        "eager": "--training --backend=eager ",
        "aot_eager": "--training --backend=aot_eager ",
        "cudagraphs": "--training --backend=cudagraphs ",
        "aot_nvfuser": "--training --nvfuser --backend=aot_ts_nvfuser ",
        "nvprims_nvfuser": "--training --backend=nvprims_nvfuser ",
        "inductor": "--training --inductor ",
        "inductor_no_cudagraphs": "--training --inductor --disable-cudagraphs ",
        "inductor_max_autotune": "--training --inductor --inductor-compile-mode max-autotune ",
        "inductor_max_autotune_no_cudagraphs": (
            "--training --inductor --inductor-compile-mode max-autotune-no-cudagraphs --disable-cudagraphs "
        ),
    },
    "inference": {
        "aot_eager": "--inference --backend=aot_eager ",
        "eager": "--inference --backend=eager ",
        "ts_nnc": "--inference --speedup-ts ",
        "ts_nvfuser": "--inference -n100 --speedup-ts --nvfuser ",
        "trt": "--inference -n100 --speedup-trt ",
        "ts_nvfuser_cudagraphs": "--inference --backend=cudagraphs_ts ",
        "inductor": "--inference -n50 --inductor ",
        "inductor_no_cudagraphs": "--inference -n50 --inductor --disable-cudagraphs ",
        "inductor_max_autotune": "--inference -n50 --inductor --inductor-compile-mode max-autotune ",
        "inductor_max_autotune_no_cudagraphs": (
            "--inference -n50 --inductor --inductor-compile-mode max-autotune-no-cudagraphs --disable-cudagraphs "
        ),
        "torchscript-onnx": "--inference -n5 --torchscript-onnx",
        "dynamo-onnx": "--inference -n5 --dynamo-onnx",
    },
}
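One possible way to reuse that table is to read the TABLE literal out of a local checkout of runner.py with the `ast` module, rather than importing the module itself (which pulls in heavy benchmark dependencies at import time). The sketch below is only an illustration of that idea; the function names and the path argument are assumptions, not anything that exists in inductor-tools today.

```python
# Hedged sketch: pull the TABLE dict out of pytorch's
# benchmarks/dynamo/runner.py without importing the module.
# load_flag_table() and flags_for() are hypothetical helper names.
import ast
from pathlib import Path


def load_flag_table(runner_path):
    """Return the TABLE dict literal defined in runner.py.

    Parses the file and evaluates only the TABLE assignment, so no
    runner.py dependencies are imported.
    """
    tree = ast.parse(Path(runner_path).read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            names = [t.id for t in node.targets if isinstance(t, ast.Name)]
            if "TABLE" in names:
                # TABLE is a plain dict of string literals, so
                # literal_eval on the value node is safe.
                return ast.literal_eval(node.value)
    raise KeyError(f"TABLE not found in {runner_path}")


def flags_for(table, mode, backend):
    """Look up the benchmark flags for a mode/backend pair."""
    try:
        return table[mode][backend].strip()
    except KeyError:
        known = sorted(table.get(mode, {}))
        raise SystemExit(f"unsupported {mode}/{backend}; known backends: {known}")
```

A wrapper script could then resolve, say, `flags_for(table, "inference", "inductor")` and pass the result through to the benchmark command, so adding a new backend upstream requires no change on the inductor-tools side.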
chuanqi129 commented 2 months ago

go ahead