lukashermann / hulc

Hierarchical Universal Language Conditioned Policies
http://hulc.cs.uni-freiburg.de
MIT License

Training crashes with task_ABCD_D dataset #17

Closed: DaniAffCH closed this issue 7 months ago

DaniAffCH commented 7 months ago

If I run the training on calvin_debug_dataset, everything works fine, but if I use the real dataset task_ABCD_D, the training crashes after completing the initial shared memory loading. This is the stack trace:

Traceback (most recent call last):
  File "/home/jovyan/lcrs/hulc/hulc/training.py", line 171, in <module>
    train()
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/hydra/main.py", line 48, in decorated_main
    _run_hydra(
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/hydra/_internal/utils.py", line 377, in _run_hydra
    run_and_report(
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/hydra/_internal/utils.py", line 294, in run_and_report
    raise ex
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/hydra/_internal/utils.py", line 211, in run_and_report
    return func()
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/hydra/_internal/utils.py", line 378, in <lambda>
    lambda: hydra.run(
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/hydra/_internal/hydra.py", line 111, in run
    _ = ret.return_value
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/hydra/core/utils.py", line 233, in return_value
    raise self._return_value
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/hydra/core/utils.py", line 160, in run_job
    ret.return_value = task_function(task_cfg)
  File "/home/jovyan/lcrs/hulc/hulc/training.py", line 74, in train
    trainer.fit(model, datamodule=datamodule, ckpt_path=chk)  # type: ignore
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 603, in fit
    call._call_and_handle_interrupt(
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 645, in _fit_impl
    self._run(model, ckpt_path=self.ckpt_path)
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1037, in _run
    self._call_setup_hook()  # allow user to setup lightning_module in accelerator environment
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1284, in _call_setup_hook
    self._call_lightning_datamodule_hook("setup", stage=fn)
  File "/opt/conda/envs/lcrs_venv/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1361, in _call_lightning_datamodule_hook
    return fn(*args, **kwargs)
  File "/home/jovyan/lcrs/calvin/calvin_models/calvin_agent/datasets/calvin_data_module.py", line 92, in setup
    train_dataset.setup_shm_lookup(train_shm_lookup)
  File "/home/jovyan/lcrs/calvin/calvin_models/calvin_agent/datasets/shm_dataset.py", line 41, in setup_shm_lookup
    key = list(self.episode_lookup_dict.keys())[0]
IndexError: list index out of range
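
For context, the failing line takes the first key of self.episode_lookup_dict; if the shared memory loading step did not actually populate any episodes (likely because /dev/shm is too small, as suggested below), the dict is empty and indexing [0] raises the IndexError. A minimal sketch of a guard that would fail with a clearer message (the helper name and message are hypothetical, not part of hulc or calvin):

def first_episode_key(episode_lookup_dict):
    # Hypothetical helper for illustration only; hulc/calvin do not ship this.
    # Fail loudly when the shared-memory lookup came back empty instead of
    # raising a bare IndexError.
    if not episode_lookup_dict:
        raise RuntimeError(
            "episode_lookup_dict is empty: the shared-memory dataset was likely "
            "not loaded, e.g. because /dev/shm is too small for the dataset."
        )
    return next(iter(episode_lookup_dict))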
lukashermann commented 7 months ago

@DaniAffCH How big is your shared memory? It might be too small; see this issue: https://github.com/lukashermann/hulc/issues/8#issuecomment-1410095454

Does it work when you don't use the shared memory dataloader (by setting datamodule/datasets=vision_lang)? Did you try running the dataset task_D_D, which is roughly a quarter of the size of task_ABCD_D?
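
One quick sanity check before using the shm dataloader is to compare the capacity of /dev/shm with the on-disk size of the dataset. A minimal sketch, assuming a dataset path that you would need to adjust to your setup:

import shutil
from pathlib import Path

def dir_size_gb(path: Path) -> float:
    # Rough on-disk size of a directory in GB.
    return sum(f.stat().st_size for f in path.rglob("*") if f.is_file()) / 1e9

# The shm dataloader needs /dev/shm to be large enough to hold the preloaded episodes.
shm = shutil.disk_usage("/dev/shm")
print(f"/dev/shm total: {shm.total / 1e9:.1f} GB, free: {shm.free / 1e9:.1f} GB")

dataset = Path("/path/to/task_ABCD_D")  # assumed location, replace with yours
if dataset.exists():
    print(f"dataset size on disk: {dir_size_gb(dataset):.1f} GB")

If /dev/shm is much smaller than the dataset, either enlarge it (for containers, e.g. Docker's --shm-size flag) or fall back to the regular dataloader with the datamodule/datasets=vision_lang override mentioned above.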

DaniAffCH commented 7 months ago

You are right, everything indeed works without using the shm. Thanks for your support!

lukashermann commented 7 months ago

You're welcome. Just note that not using the shm dataloader will slow down training by a factor of roughly 1.3 to 1.4.