snarayan21 opened 1 year ago
Hey, is there anything else I can provide to help solve this? This is a major issue we're seeing for many of our models at this point. Appreciate your help, thank you!
The minifier script is not helpful. Are you able to run some ablation experiments? E.g., can you try with `backend="aot_eager"` and see if it converges that way?
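For reference, a minimal sketch of the kind of backend ablation being suggested here; the model, loss, and dataloader names are placeholders, not taken from the actual repro:

```python
import torch

model = build_model()                      # placeholder for the MPT-style model
optimizer = torch.optim.AdamW(model.parameters())

# Swapping the backend isolates which layer of the stack causes the divergence:
#   "eager"     -> TorchDynamo only (no AOTAutograd, no Inductor)
#   "aot_eager" -> TorchDynamo + AOTAutograd, but no Inductor codegen
#   "inductor"  -> the default full stack
compiled_model = torch.compile(model, backend="aot_eager")

for batch in train_loader:                 # placeholder dataloader
    optimizer.zero_grad()
    loss = compute_loss(compiled_model(batch), batch)   # placeholder loss fn
    loss.backward()
    optimizer.step()
```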
@snarayan21 please try as directed
Hey @ezyang @voznesenskym, apologies for the delay in getting around to this. I just ran with `backend="aot_eager"` on a smaller model and it does converge (the no-compile run in orange overlaps with `aot_eager` in blue, while the run without `aot_eager` shows no change in train loss):
According to this page the issue is with TorchInductor, but how would I go about root-causing this?
Thank you for your help!
You could try running the accuracy minifier; chances are it's not going to work, but sometimes you get lucky. https://pytorch.org/docs/stable/torch.compiler_troubleshooting.html
A full set of debug logs, à la `TORCH_LOGS=+dynamo,+aot,+inductor`, may help. If you have instructions to reproduce the training, that might help too. Converging on `aot_eager` is a clear indication that it's an Inductor problem. If you can try `aot_eager_decomp_partition`, that will also give more signal on whether it's a decomp problem.
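For anyone following along, a sketch of how these knobs can be wired up from Python. The environment-variable and `set_logs` argument names follow the linked troubleshooting docs, but treat them as assumptions to double-check against your PyTorch build; `model` is a placeholder:

```python
import logging
import os

# Accuracy-minifier knobs (set before importing torch). Repro level 4 asks the
# minifier to hunt for accuracy divergences rather than hard crashes.
os.environ["TORCHDYNAMO_REPRO_AFTER"] = "aot"   # or "dynamo"
os.environ["TORCHDYNAMO_REPRO_LEVEL"] = "4"

import torch

# Roughly equivalent to TORCH_LOGS=+dynamo,+aot,+inductor on the command line.
torch._logging.set_logs(dynamo=logging.DEBUG, aot=logging.DEBUG, inductor=logging.DEBUG)

# Then run the training loop with the backend under test.
compiled = torch.compile(model, backend="aot_eager_decomp_partition")
```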
I just tested this with torch 2.2.0 and the issue persists (see below). I previously did use the accuracy minifier -- would you recommend using that with `backend="aot_eager"`? Or is there another way to diagnose what's being compiled wrong?
Okay, I confirmed the error does still happen with `aot_eager_decomp_partition`, suggesting that it may be a decomp problem. How do we debug further?
@Skylion007 were you able to repro locally? The repro given above looks like a failure in the minifier.
A repro would help a bunch with narrowing down further. A few things I would try next if I could repro are:
(1) Also run with `backend="aot_eager"`:
(1a) If it passes, then there are still a couple of options, but one likely culprit is one of the inductor decomps, which are run in `aot_eager_decomp_partition` but not `aot_eager`. You could bisect them by removing decomps from here (see the sketch after this list).
(1b) If it fails, but `compile(backend="eager")` passes, then there are also a few options: an AOTAutograd bug, a functionalization bug, custom ops issues, and a few others. In this case, one useful thing to check would be whether there are any (non-ATen) custom operators in your model.
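A rough sketch of the decomp bisection described in (1a). The location of the decomp table has moved around between releases, so `torch._inductor.decomposition.decompositions` (and the fact that `aot_eager_decomp_partition` picks up edits to it) is an assumption to verify against your build; `model` is a placeholder:

```python
import torch
from torch._inductor import decomposition

# In recent builds this is a dict mapping ATen ops to their decomposition fns.
decomp_table = decomposition.decompositions
print(f"{len(decomp_table)} decomps registered")

# Classic bisection: drop half of the entries, re-run training, and keep
# narrowing until a single op's decomp is left that flips convergence.
suspects = sorted(decomp_table.keys(), key=str)
for op in suspects[: len(suspects) // 2]:
    decomp_table.pop(op, None)

# Re-run the experiment with the trimmed table.
compiled = torch.compile(model, backend="aot_eager_decomp_partition")
```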
@bdhirsh I was able to repro locally, and I have some rough repro instructions now; it should repro in as little as half an hour of training with `compile_config = {"backend": "aot_eager_decomp_partition"}`. Let me know if you need more details to repro.
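For context, a minimal sketch of how that config is plugged in, assuming a recent Composer `Trainer` that accepts a `compile_config` dict and forwards it to `torch.compile`; the model and dataloader names are placeholders:

```python
from composer import Trainer

trainer = Trainer(
    model=composer_model,            # placeholder ComposerModel
    train_dataloader=train_loader,   # placeholder dataloader
    max_duration="100ba",            # placeholder duration ("ba" = batches)
    # Forwarded as kwargs to torch.compile; switching the backend string here is
    # how the eager / aot_eager / aot_eager_decomp_partition ablations were run.
    compile_config={"backend": "aot_eager_decomp_partition"},
)
trainer.fit()
```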
@Skylion007 I haven't been able to fully repro. Pasting what I've done so far + current issues below:
Stuff I did so far
Mostly just followed the readme steps + hit a few snags (jotting them down here)
- comment out an sklearn import due to https://github.com/pybind/pybind11/discussions/3453 (I don't seem to have `GLIBCXX_3.4.29` on my machine)
- upgrade my datasets library due to https://stackoverflow.com/questions/77433096/notimplementederror-loading-a-dataset-cached-in-a-localfilesystem-is-not-suppor
- I had a newer version of triton, so had to manually fix some bits of the triton flash attention impl used in bert: https://github.com/openai/triton/issues/1098
- Had to follow the readme to change the yaml from `split: train` to `split: train_small`
- Had to install apex
This was enough to get `composer main.py yamls/main/mosaic-bert-base-uncased.yaml` running properly with eager mode. When I re-ran it a day later, though, I got some weird shm errors - I worked around it by manually changing / copying over the data dir, `data_local: ./my-copy-c4`, to `data_local: ./my-copy2-c4`.
Current issue
`composer main.py yamls/main/mosaic-bert-base-uncased.yaml` is no longer running properly for me - I now get this error:
```
Initializing model...
n_params=1.3740e+08
Building train loader...
Traceback (most recent call last):
File "/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/main.py", line 272, in <module>
main(cfg)
File "/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/main.py", line 177, in main
train_loader = build_dataloader(
File "/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/main.py", line 134, in build_dataloader
return text_data_module.build_text_dataloader(cfg, tokenizer,
File "/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/src/text_data.py", line 274, in build_text_dataloader
dataset = StreamingTextDataset(
File "/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/src/text_data.py", line 134, in __init__
super().__init__(
File "/home/hirsheybar/local/b/pytorch-env/lib/python3.10/site-packages/streaming/base/dataset.py", line 325, in __init__
self._shm_prefix, self._locals_shm = get_shm_prefix(my_locals, world)
File "/home/hirsheybar/local/b/pytorch-env/lib/python3.10/site-packages/streaming/base/shared.py", line 340, in get_shm_prefix
raise ValueError(f'Reused local directory: {sorted(my_locals_set)} vs ' +
ValueError: Reused local directory: ['/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/my-copy-c4/train_small'] vs ['/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/my-copy-c4/train_small']. Provide a different one.
```
I'm not really sure how to interpret that error message. At one point I added a `breakpoint()` inside dynamo, which was probably a mistake since the repro is running a distributed harness (no idea if that's related to the issue that I'm now seeing, though).
To be clear, I wasn't seeing that error message a few days ago but I am now. I tried this:
```
rm -rf ./my-copy-c4
python src/convert_dataset.py --dataset c4 --data_subset en --out_root ./my-copy-c4 --splits train_small val
```
But I'm getting the same error.
Okay, yeah - you need to delete your `local` directory if you change the dataset at all, or if your previous convert_dataset run failed for any reason. So you probably just need to delete `/data/users/hirsheybar/b/pytorch/examples/examples/benchmarks/bert/my-copy-c4/train_small` and regenerate the dataset. Essentially, it will not overwrite a local directory that already has a cached dataset, as this is usually an error, so you will need to just regenerate a new one.
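The same cleanup expressed as a small script (it mirrors the shell commands above; the paths are the ones used in this thread):

```python
import shutil
import subprocess
from pathlib import Path

# Wipe the cached `local` directory so streaming doesn't refuse to reuse it...
local_dir = Path("./my-copy-c4")   # the data_local path from the yaml
shutil.rmtree(local_dir, ignore_errors=True)

# ...then regenerate the dataset from scratch.
subprocess.run(
    [
        "python", "src/convert_dataset.py",
        "--dataset", "c4", "--data_subset", "en",
        "--out_root", str(local_dir),
        "--splits", "train_small", "val",
    ],
    check=True,
)
```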
I started a new PR that removes the triton dependency and other problematic dependencies - feel free to give it a whirl: https://github.com/mosaicml/examples/pull/440. You can also train without the FlashAttention dependencies (on a way, way smaller batch size), and I suspect you will run into the same issue. You also do not need to install apex anymore: if you are on PyTorch >= 2.0, you can switch the algorithm in the yaml to LowPrecisionLayerNorm instead. I will update the YAML in the PR to use that. @bdhirsh
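A sketch of that apex-free alternative on the Python side rather than via the yaml; the `LowPrecisionLayerNorm` import is assumed to live in `composer.algorithms`, so verify it against your Composer version:

```python
from composer.algorithms import LowPrecisionLayerNorm

# Runs LayerNorm in low precision without needing apex's fused kernels
# (PyTorch >= 2.0). Pass this list to the Composer Trainer's `algorithms` arg,
# analogous to the algorithms section of the yaml.
algorithms = [LowPrecisionLayerNorm()]
```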
I just realized Stable Diffusion and BERT are both skipped in the latest benchmark tests so it's possible the issue could be more widespread: https://github.com/pytorch/pytorch/blob/139c4ab69da404d4e0c0d099a728c74ce06e341b/benchmarks/dynamo/torchbench.py#L237
@bdhirsh's fix here might have also fixed this issue: https://github.com/pytorch/pytorch/issues/116935#issuecomment-1935115652 - fingers crossed.
@Skylion007 @snarayan21 Can you please test with the latest nightlies and see if the issue has been resolved?
I'm helping scrub old issues this week. @Skylion007 @snarayan21, from the comment above, is one of you able to check whether the issue has been resolved?
@Skylion007 do you know if this is still broken? If you think it is, I can take another shot at following the repro steps that you listed back in https://github.com/pytorch/pytorch/issues/113180#issuecomment-1877456310 (I am still holding out hope that this was the same underlying issue as https://github.com/pytorch/pytorch/issues/116935, although that might not be the case)
The BERT problem is more or less solved. SD2 hits a different issue: a bug at the intersection of mixed precision, torch.compile, FSDP-1, and evaluating after training for a bit and saving. But yes, the BERT problem does seem to be fixed.
@snarayan21 Will try reproing the original issue later this week.
🐛 Describe the bug

We are facing issues with loss curves and reproducibility when using `torch.compile()` with our models. Attached below is a graph of train loss for runs with `torch.compile()` (higher loss) and runs without (lower loss). This model is an MPT-style transformer, but we've also seen the issue occur with evaluation for an autoencoder setup (also shown below). Would love to address this issue as soon as possible!

Higher train loss:

Worse eval scores (orange and turquoise are with `torch.compile()`):

Error logs

Here's the error log we get from running `python minifier_launcher.py`:

Minified repro
Versions
```
PyTorch version: 2.2.0a0+git21b6030
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: version 3.26.3
Libc version: glibc-2.31

Python version: 3.10.13 (main, Aug 25 2023, 13:20:03) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
Nvidia driver version: 515.48.07 cuDNN version: Probably one of the following: /usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0 /usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0 HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True
CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian Address sizes: 48 bits physical, 48 bits virtual CPU(s): 64 On-line CPU(s) list: 0-63 Thread(s) per core: 1 Core(s) per socket: 32 Socket(s): 2 NUMA node(s): 8 Vendor ID: AuthenticAMD CPU family: 25 Model: 1 Model name: AMD EPYC 7513 32-Core Processor Stepping: 1 Frequency boost: enabled CPU MHz: 1435.899 CPU max MHz: 2600.0000 CPU min MHz: 1500.0000 BogoMIPS: 5199.66 Virtualization: AMD-V L1d cache: 2 MiB L1i cache: 2 MiB L2 cache: 32 MiB L3 cache: 256 MiB NUMA node0 CPU(s): 0-7 NUMA node1 CPU(s): 8-15 NUMA node2 CPU(s): 16-23 NUMA node3 CPU(s): 24-31 NUMA node4 CPU(s): 32-39 NUMA node5 CPU(s): 40-47 NUMA node6 CPU(s): 48-55 NUMA node7 CPU(s): 56-63 Vulnerability Itlb multihit: Not affected Vulnerability L1tf: Not affected Vulnerability Mds: Not affected Vulnerability Meltdown: Not affected Vulnerability Mmio stale data: Not affected Vulnerability Retbleed: Not affected Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected Vulnerability Srbds: Not affected Vulnerability Tsx async abort: Not affected Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] numpy==1.26.0
[pip3] onnx==1.14.0
[pip3] onnxruntime==1.15.1
[pip3] optree==0.9.2
[pip3] pytorch-ranger==0.1.1
[pip3] pytorch-triton==2.1.0+6e4932cda8
[pip3] torch==2.2.0a0+git21b6030
[pip3] torch-optimizer==0.3.0
[pip3] torchmetrics==1.0.3
[pip3] torchvision==0.16.0+cu121
[pip3] triton-nightly==2.1.0.dev20230726014945
[pip3] triton-pre-mlir==2.0.0
[conda] Could not collect
```
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @wconstab @bdhirsh @anijain2305