Hi! I've cloned the latest version of cramming and tried to verify the installation with the command:
python pretrain.py name=test arch=hf-bert-base train=bert-base data=sanity-check-2 dryrun=True impl.microbatch_size=2
Doing so results in an error related to torch.dynamo, see this pastebin. On the other hand, if I append impl.compile_torch=False, then everything runs smoothly. I believe the same error occurs with the 'replicate the final recipe' code as well: it gives a similar torch.dynamo error if one doesn't use impl.compile_torch=False. I tested this with torch=2.0.1 and Python 3.9 and 3.10. (Note that Python 3.11 doesn't work, since torch.compile doesn't support Python 3.11.)
Hi Spencer,
I could replicate the issue with the test command, which was related to an update in transformers. This should be fixed now.
I cannot replicate a dynamo problem with the final recipe. What GPU are you testing this with? One debugging option you can try is to set impl._inductor_vars=null.
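For instance, appended to the test command from above:
python pretrain.py name=test arch=hf-bert-base train=bert-base data=sanity-check-2 dryrun=True impl.microbatch_size=2 impl._inductor_vars=null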
Thanks for the quick response and update. I have now tried this on a few different GPUs (none of which are covered in your paper, unfortunately) and have run into a few different issues, some expected and some unexpected.
Titan X & 1080 Ti: Things work great when using compile_torch=False. I now get the following (expected) error with compile_torch=True:
torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised RuntimeError: Found NVIDIA TITAN X (Pascal) which is too old to be supported by the triton GPU compiler, which is used as the backend. Triton only supports devices of CUDA Capability >= 7.0, but your device is of CUDA capability 6.1
So the GPUs are just too old to use Triton, which is fine.
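For anyone else who wants to check their hardware up front, here's a minimal sketch (my own, just standard torch calls) that tests whether a card clears Triton's capability bar:

    import torch

    # Triton's GPU backend needs CUDA capability >= 7.0 (per the error above);
    # Pascal cards like the Titan X and 1080 Ti report 6.1 and are excluded.
    major, minor = torch.cuda.get_device_capability()
    print(f"capability {major}.{minor}, triton-capable: {(major, minor) >= (7, 0)}")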
A10: I spun up an A10 instance on LambdaLabs, which doesn't have Python 3.9+ installed by default. So I created a Python 3.9 virtual environment as recommended here, then tried the test code
python pretrain.py name=test arch=hf-bert-base train=bert-base data=sanity-check-2 dryrun=True impl.microbatch_size=2
but it results in this error. Adding the null inductor vars flag (as you recommended) gives this error.
Any thoughts?
Hm, these seem more like general torch.compile problems. For the Titan X and 1080 Ti, you can try disabling the Triton parts that I'm manually enabling (set impl._inductor_vars.triton.cudagraphs=False).
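As before, this is just another override on the command line, e.g.:
python pretrain.py name=test arch=hf-bert-base train=bert-base data=sanity-check-2 dryrun=True impl.microbatch_size=2 impl._inductor_vars.triton.cudagraphs=False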
For the A10, it seems like an environment/installation problem where torch.compile cannot find the right C headers. I've never used a LambdaLabs instance; are you able to compile simpler models?
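For example, a generic smoke test like this (nothing cramming-specific, just plain torch) should compile and run if the environment is set up correctly:

    import torch

    # Tiny end-to-end check: if even this fails under torch.compile, the
    # problem is the environment (compilers/headers), not this codebase.
    model = torch.nn.Linear(8, 8).cuda()
    compiled = torch.compile(model)
    out = compiled(torch.randn(4, 8, device="cuda"))
    print(out.shape)  # expected: torch.Size([4, 8])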
Re: the Titan X and 1080 Ti, unfortunately disabling the Triton cudagraphs still results in the following error (full terminal output here):
raise BackendCompilerFailed(self.compiler_fn, e) from e torch._dynamo.exc.BackendCompilerFailed: debug_wrapper raised AssertionError: While executing %self_encoder_layers_0_attn_self_attention_query_key_value : [#users=1] = call_module[target=self_encoder_layers_0_attn_self_attention_query_key_value](args = (%self_encoder_layers_0_norm1,), kwargs = {})
I have been able to use LambdaLabs A10s with torch's compile option on other language model training code, e.g. nanoGPT. However, for that I only needed Python 3.8, so I didn't have to make a new virtual env with Python 3.9. What is the reason Python 3.9-3.10 is needed for cramming? Is there a difficulty with allowing 3.8?
Ok, thanks for checking. With the test for assert utils.has_triton() being required, torch.compile really is doomed on the older cards.
Regarding Python versions: lower versions of Python are untested on my side, but they might work! There might be some type-hinting generics, though, that would require fixing with from __future__ import annotations.
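For example, something like this hypothetical snippet would fail to import on 3.8 without the future import, because built-in generics in annotations are evaluated eagerly there:

    from __future__ import annotations  # defers annotation evaluation (PEP 563)

    # On Python 3.8, list[int] in an annotation would otherwise raise
    # "TypeError: 'type' object is not subscriptable" at import time.
    def batch_lengths(batches: list[list[int]]) -> list[int]:
        return [len(b) for b in batches]

    print(batch_lengths([[1, 2], [3]]))  # [2, 1]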