
ROCm: foreach is substantially slower than for-loop on MI250X #120308

netw0rkf10w opened this issue 4 months ago

netw0rkf10w commented 4 months ago

🐛 Describe the bug

On ROCm, foreach is much slower than the for-loop implementation.

Tested on an MI250x:

$ python benchmark_optimizers.py 
eager:      789.9033987810321us
foreach:    1289.8528449295554us
capturable: 1702.5857799789096us
fused:      477.9188240063377us

This does not happen on an NVIDIA A100:

$ python benchmark_optimizers.py 
eager:      646.9360729679465us
foreach:    574.848392046988us
capturable: 1437.913635163568us
fused:      351.5243310248479us
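
For context, foreach=True routes the optimizer through PyTorch's multi-tensor ("horizontally fused") kernels: each operation is launched once across the whole parameter list rather than once per parameter, so it is expected to be faster than the for-loop path, not slower. A minimal sketch of the two code paths being compared, built on the torch._foreach_add_ multi-tensor op (the tensor shapes and step size here are illustrative only, not taken from the optimizer):

import torch

params = [torch.rand(1024, 1024, device="cuda") for _ in range(10)]
grads = [torch.rand_like(p) for p in params]

# for-loop path: one kernel launch per parameter tensor
for p, g in zip(params, grads):
    p.add_(g, alpha=-0.01)

# foreach path: one horizontally fused launch over the whole list
torch._foreach_add_(params, grads, alpha=-0.01)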

To reproduce, please use the following script:

import torch
import torch.utils.benchmark as benchmark

# Helper: time f(*args, **kwargs) with torch.utils.benchmark.Timer and
# report the mean runtime in microseconds.
def benchmark_torch_function_in_microseconds(f, *args, **kwargs):
    t0 = benchmark.Timer(
        stmt="f(*args, **kwargs)", globals={"args": args, "kwargs": kwargs, "f": f}
    )
    return t0.blocked_autorange().mean * 1e6

def main():
    # Ten bias-free 1024x1024 linear layers give the optimizers ten 2-D
    # weight tensors to step over.
    model = torch.nn.Sequential(
        *[torch.nn.Linear(1024, 1024, False, device="cuda") for _ in range(10)]
    )
    input = torch.rand(1024, device="cuda")
    # One forward/backward pass so every parameter has a .grad before step().
    output = model(input)
    output.sum().backward()

    # Four AdamW variants over the same parameters: default for-loop (eager),
    # foreach (multi-tensor), fused, and capturable.
    opt = torch.optim.AdamW(model.parameters(), lr=0.01, foreach=False)
    opt_foreach = torch.optim.AdamW(model.parameters(), lr=0.01, foreach=True)
    opt_fused = torch.optim.AdamW(model.parameters(), lr=0.01, fused=True)
    opt_capturable = torch.optim.AdamW(model.parameters(), lr=0.01, fused=False, capturable=True)

    eager_runtime = benchmark_torch_function_in_microseconds(opt.step)
    print(f"eager:\t\t{eager_runtime}us")

    torch.cuda.synchronize()

    foreach_runtime = benchmark_torch_function_in_microseconds(opt_foreach.step)
    print(f"foreach:\t{foreach_runtime}us")

    torch.cuda.synchronize()

    capturable_runtime = benchmark_torch_function_in_microseconds(opt_capturable.step)
    print(f"capturable:\t{capturable_runtime}us")

    torch.cuda.synchronize()

    fused_runtime = benchmark_torch_function_in_microseconds(opt_fused.step)
    print(f"fused:\t\t{fused_runtime}us")

    torch.cuda.synchronize()

if __name__ == '__main__':
    main()
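
A possible next step for narrowing this down (a suggestion, not something already done in this report): profile a single step() of each variant with torch.profiler and compare the kernel breakdown. On ROCm the CUDA activity maps to HIP kernels, and the foreach path should show fewer, larger launches than the for-loop path. opt_foreach below refers to the optimizer constructed in the script above.

import torch
from torch.profiler import profile, ProfilerActivity

# Sketch: profile one optimizer step and print the top kernels by
# accumulated GPU time. Assumes `opt_foreach` from the script above.
with profile(activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA]) as prof:
    opt_foreach.step()
    torch.cuda.synchronize()
print(prof.key_averages().table(sort_by="cuda_time_total", row_limit=10))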

Versions

PyTorch version: 2.2.0+rocm5.7
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 5.7.31921-d1770ee1b

OS: Red Hat Enterprise Linux release 8.8 (Ootpa) (x86_64)
GCC version: (GCC) 12.2.1 20221121 (Red Hat 12.2.1-7)
Clang version: 15.0.7 (Red Hat 15.0.7-1.module+el8.8.0+17939+b58878af)
CMake version: version 3.28.3
Libc version: glibc-2.28

Python version: 3.10.10 (main, Apr 14 2023, 19:33:04) [GCC 10.3.1 20210422 (Red Hat 10.3.1-1)] (64-bit runtime)
Python platform: Linux-4.18.0-477.10.1.el8_8.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Instinct MI250X (gfx90a:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 5.7.31921
MIOpen runtime version: 2.20.0
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 64
Socket(s): 1
NUMA node(s): 4
Vendor ID: AuthenticAMD
CPU family: 25
Model: 48
Model name: AMD EPYC 7A53 64-Core Processor
Stepping: 1
CPU MHz: 2000.000
CPU max MHz: 3541.0149
CPU min MHz: 1500.0000
BogoMIPS: 3992.55
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-15,64-79
NUMA node1 CPU(s): 16-31,80-95
NUMA node2 CPU(s): 32-47,96-111
NUMA node3 CPU(s): 48-63,112-127
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm

Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] torch==2.2.0+rocm5.7
[pip3] torchaudio==2.2.0+rocm5.7
[pip3] torching==0.0.1
[pip3] torchvision==0.17.0+rocm5.7
[pip3] triton==2.1.0
[conda] Could not collect

cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang

hongxiayang commented 3 months ago

Reproduced.