Closed: isdj closed this issue 1 year ago
1) When there are multiple minima, `torch.compile` and eager mode are not guaranteed to return the same index; that is a known limitation for now. 2) The out-of-range `argmin` result is a real bug that we should fix; luckily, with recent updates to Triton by @peterbell10, that should be possible.
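The tie-breaking point in 1) can be illustrated without the attached script. Both NumPy's `argmin` and eager-mode `torch.argmin` document first-occurrence tie-breaking, while a compiled kernel reducing in a different order may legally pick any of the tied indices. This sketch uses NumPy as a stand-in; it is not the repro from the report:

```python
import numpy as np

# A row with a tie: the minimum value 1 appears at indices 1 and 3.
x = np.array([5, 1, 7, 1, 9])

# NumPy (like eager torch.argmin) documents first-occurrence tie-breaking:
idx = np.argmin(x)
print(idx)  # -> 1, the first of the tied minima

# A compiled kernel that reduces in a different order may legally return 3
# under the relaxed "any minimum index" contract, but it must still point
# at a minimum value; it must never return an arbitrary combined index.
assert x[idx] == x.min()
```

The reported behavior is a contract violation of the second kind: the returned index neither matches eager mode nor points at a minimum at all.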
🐛 Describe the bug
When running a job on PyTorch 2.0 we see erroneous behavior from torch.argmin: the compiled model does not return the expected argmin, and sometimes it even appears to sum the indices of the tied minimum values, producing an index that is out of range.
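One way to detect this symptom independently of the attached script is to validate each returned index: accept any tied minimum (the relaxed compiled contract) but reject out-of-range indices and indices that miss the minimum value. The helper below is an illustrative sketch using NumPy; the function name and signature are my own, not from the report:

```python
import numpy as np

def validate_argmin(x: np.ndarray, idx: np.ndarray, axis: int = -1) -> None:
    """Check that `idx` could be a valid argmin result for `x` along `axis`.

    Accepts any index of a tied minimum, but raises on out-of-range
    indices and on indices that do not point at a minimum value.
    """
    n = x.shape[axis]
    if np.any((idx < 0) | (idx >= n)):
        raise ValueError(f"argmin index out of range [0, {n})")
    # Gather the values the indices point at and compare to the true minima.
    picked = np.take_along_axis(x, np.expand_dims(idx, axis), axis=axis)
    if not np.array_equal(picked.squeeze(axis), x.min(axis=axis)):
        raise ValueError("argmin index does not point at a minimum value")

# Tied minimum at columns 1 and 2: either index is acceptable.
row = np.array([[3.0, 0.5, 0.5, 2.0]])
validate_argmin(row, np.array([1]))
validate_argmin(row, np.array([2]))

# A "summed" tie index like 1 + 2 = 3 points at 2.0, not the minimum,
# so it is rejected.
try:
    validate_argmin(row, np.array([3]))
except ValueError as e:
    print(e)
```

Running such a check on the compiled model's output would distinguish benign tie-order differences from the genuine bug reported here.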
For example, the attached script produces the following output:
I have attached a training script and a NumPy input tensor to reproduce the output above.
We are running in AWS SageMaker with the following Docker image: 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-training:2.0.0-gpu-py310-cu118-ubuntu20.04-sagemaker
After discussing with the AWS team, they claim it is a PyTorch error and not a container issue.

Attachment: input_and_training_script.zip
Versions
Collecting environment information...
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.8
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31
Python version: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:26:04) [GCC 10.4.0] (64-bit runtime)
Python platform: Linux-5.10.173-154.642.amzn2.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 525.85.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7R32
Stepping: 0
CPU MHz: 3175.298
BogoMIPS: 5599.87
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 64 KiB
L1i cache: 64 KiB
L2 cache: 1 MiB
L3 cache: 8 MiB
NUMA node0 CPU(s): 0-3
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, STIBP always-on, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Versions of relevant libraries:
[pip3] numpy==1.23.5
[pip3] sagemaker-pytorch-training==2.7.0
[pip3] torch==2.0.0
[pip3] torchaudio==2.0.1
[pip3] torchdata==0.6.0
[pip3] torchnet==0.0.4
[pip3] torchtext==0.15.1
[pip3] torchvision==0.15.1
[pip3] triton==2.0.0.dev20221202
[conda] blas 1.0 mkl conda-forge
[conda] mkl 2023.0.0 h84fe81f_26648 conda-forge
[conda] mkl-include 2023.0.0 h84fe81f_26648 conda-forge
[conda] numpy 1.23.5 py310h53a5b5f_0 conda-forge
[conda] pytorch 2.0.0 aws_py3.10_cuda11.8_cudnn8.7.0_0 https://aws-ml-conda-pre-prod-ec2.s3.us-west-2.amazonaws.com/
[conda] pytorch-cuda 11.8 h7e8668a_3 https://aws-ml-conda-pre-prod-ec2.s3.us-west-2.amazonaws.com/
[conda] pytorch-mutex 1.0 cuda https://aws-ml-conda-pre-prod-ec2.s3.us-west-2.amazonaws.com/
[conda] sagemaker-pytorch-training 2.7.0 pypi_0 pypi
[conda] torchaudio 2.0.1 py310_cu118 https://aws-ml-conda-pre-prod-ec2.s3.us-west-2.amazonaws.com/
[conda] torchdata 0.6.0 py310 https://aws-ml-conda-pre-prod-ec2.s3.us-west-2.amazonaws.com/
[conda] torchnet 0.0.4 pypi_0 pypi
[conda] torchtext 0.15.1 py310 https://aws-ml-conda-pre-prod-ec2.s3.us-west-2.amazonaws.com/
[conda] torchvision 0.15.1 py310_cu118 https://aws-ml-conda-pre-prod-ec2.s3.us-west-2.amazonaws.com/
[conda] triton 2.0.0.dev20221202 pypi_0 pypi
cc @ezyang @gchanan @zou3519 @soumith @msaroufim @wconstab @ngimel @bdhirsh @voznesenskym @penguinwu @anijain2305 @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @Xia-Weiwen @wenzhe-nrv @jiayisunx @peterbell10 @desertfire