msakarvadia / AttentionLens

Interpreting the latent space representations of attention head outputs for LLMs
MIT License

Error when training in train_pl.py #11

Closed 123epsilon closed 1 year ago

123epsilon commented 1 year ago

Running this on my Mac with CPU gives:

Dataset bookcorpus downloaded and prepared to /Users/arhamkhan/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/eddee3cae1cc263a431aa98207d4d27fd8a73b0a9742f692af0e6c65afa4d75f. Subsequent calls will reuse this data.
Using pad_token, but it is not set yet.
Loaded pretrained model gpt2-small into HookedTransformer
model created on device:  cpu
Traceback (most recent call last):
  File "/Users/arhamkhan/Projects/AttentionLens/attention_lense/train/train_pl.py", line 156, in <module>
    trainer.fit(model, data_module)
  File "/Users/arhamkhan/miniconda3/envs/attnlens/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 529, in fit
    call._call_and_handle_interrupt(
  File "/Users/arhamkhan/miniconda3/envs/attnlens/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 41, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/Users/arhamkhan/miniconda3/envs/attnlens/lib/python3.10/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 91, in launch
    return function(*args, **kwargs)
  File "/Users/arhamkhan/miniconda3/envs/attnlens/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 568, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/Users/arhamkhan/miniconda3/envs/attnlens/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 949, in _run
    self.strategy.setup(self)
  File "/Users/arhamkhan/miniconda3/envs/attnlens/lib/python3.10/site-packages/pytorch_lightning/strategies/ddp.py", line 164, in setup
    self.configure_ddp()
  File "/Users/arhamkhan/miniconda3/envs/attnlens/lib/python3.10/site-packages/pytorch_lightning/strategies/ddp.py", line 269, in configure_ddp
    self.model = self._setup_model(_LightningModuleWrapperBase(self.model))
  File "/Users/arhamkhan/miniconda3/envs/attnlens/lib/python3.10/site-packages/pytorch_lightning/strategies/ddp.py", line 183, in _setup_model
    return DistributedDataParallel(module=model, device_ids=device_ids, **self._ddp_kwargs)
  File "/Users/arhamkhan/miniconda3/envs/attnlens/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 676, in __init__
    _sync_module_states(
  File "/Users/arhamkhan/miniconda3/envs/attnlens/lib/python3.10/site-packages/torch/distributed/utils.py", line 142, in _sync_module_states
    _sync_params_and_buffers(
  File "/Users/arhamkhan/miniconda3/envs/attnlens/lib/python3.10/site-packages/torch/distributed/utils.py", line 160, in _sync_params_and_buffers
    dist._broadcast_coalesced(
RuntimeError: Invalid scalar type

This seems to be an issue with the call to the torch.distributed (dist) module by PyTorch Lightning.
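For reference, the failure happens because the Trainer falls through to the DDP strategy, whose parameter broadcast (`dist._broadcast_coalesced`) chokes on a CPU/MPS-only machine. A minimal sketch of one possible workaround is below — `pick_trainer_kwargs` is a hypothetical helper, not part of the AttentionLens codebase; the kwarg names match `pytorch_lightning.Trainer`'s `accelerator`/`devices`/`strategy` arguments:

```python
# Hypothetical helper: only request the DDP strategy when multiple CUDA
# devices are actually available, so a CPU-only run (e.g. Apple Silicon)
# never hits DDP's dist._broadcast_coalesced parameter sync.
def pick_trainer_kwargs(num_cuda_devices: int) -> dict:
    if num_cuda_devices > 1:
        # Multi-GPU: a distributed strategy is appropriate here.
        return {"accelerator": "gpu", "devices": num_cuda_devices, "strategy": "ddp"}
    if num_cuda_devices == 1:
        # Single GPU: no distributed strategy needed.
        return {"accelerator": "gpu", "devices": 1}
    # CPU-only: run in a single process.
    return {"accelerator": "cpu", "devices": 1}

# In train_pl.py this could be wired up roughly as:
#   import torch
#   trainer = pl.Trainer(**pick_trainer_kwargs(torch.cuda.device_count()), ...)
```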

msakarvadia commented 1 year ago

Let me look into this further - there is a chance that I didn't configure PyTorch Lightning to run on CPUs. But I will get back to you.

123epsilon commented 1 year ago

Yeah, could be - or maybe some issue with trying to distribute a model across CPUs that PTL doesn't check. But sounds good.

msakarvadia commented 1 year ago

I had the same error on my Windows machine.

nathaniel-hudson commented 1 year ago

@123epsilon, is your computer where you had this issue on an Apple Silicon-based Mac or an Intel-based Mac?

123epsilon commented 1 year ago

@nathaniel-hudson It's Apple Silicon - an Apple M2 Max chip specifically - no CUDA support on my machine.

msakarvadia commented 1 year ago

Closed with https://github.com/msakarvadia/AttentionLens/commit/665e026ee2f04f0d0113a9647628aad4259207c0