lean-dojo / ReProver

Retrieval-Augmented Theorem Provers for Lean
https://leandojo.org
MIT License

"ValueError: Tensors must be contiguous" When re-running experiments #72

Closed · AG161 closed this 2 weeks ago

AG161 commented 1 month ago

OS: Ubuntu 22.04.4

I followed the instructions at https://github.com/lean-dojo/ReProver?tab=readme-ov-file#requirements to create a ReProver environment and download the benchmark, then ran the following command (slightly adjusted from the README to fit a smaller GPU):

python generation/main.py fit --config generation/confs/cli_lean4_random.yaml --trainer.logger.name train_generator_random --trainer.logger.save_dir logs/train_generator_random --model.eval_num_workers 1 --data.batch_size 1 --data.num_workers 1 --trainer.accumulate_grad_batches 8
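
(With --data.batch_size 1 and --trainer.accumulate_grad_batches 8 on a single GPU, the effective batch size per optimizer step is 1 × 8 = 8; the idea is to keep roughly the original effective batch size while fitting each individual forward/backward pass in less GPU memory.)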

Here are the logs:

(rp2) agittis@trurl:~$ cd RP/ReProver/
(rp2) agittis@trurl:~/RP/ReProver$ wandb login
wandb: Currently logged in as: agittis (ucsc-atp). Use `wandb login --relogin` to force relogin
(rp2) agittis@trurl:~/RP/ReProver$ python generation/main.py fit --config generation/confs/cli_lean4_random.yaml --trainer.logger.name train_generator_random --trainer.logger.save_dir logs/train_generator_random --model.eval_num_workers 1 --data.batch_size 1 --data.num_workers 1 --trainer.accumulate_grad_batches 8
2024-09-12 13:30:04.934 | DEBUG    | lean_dojo.data_extraction.lean:<module>:41 - Using GitHub personal access token for authentication
[2024-09-12 13:30:05,520] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
2024-09-12 13:30:06.551 | INFO     | __main__:main:19 - PID: 150710
Seed set to 3407
/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
2024-09-12 13:30:08.335 | INFO     | common:__init__:200 - Building the corpus from data/leandojo_benchmark_4/corpus.jsonl
2024-09-12 13:30:44.123 | INFO     | generation.datamodule:__init__:147 - Without retrieval data
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
initializing deepspeed distributed: GLOBAL_RANK: 0, MEMBER: 1/1
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: agittis (ucsc-atp). Use `wandb login --relogin` to force relogin
wandb: WARNING Path logs/train_generator_random/wandb/ wasn't writable, using system temp directory.
wandb: Tracking run with wandb version 0.18.0
wandb: Run data is saved locally in /tmp/wandb/run-20240912_133045-d1kyk02f
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run train_generator_random
wandb: ⭐️ View project at https://wandb.ai/ucsc-atp/lightning_logs
wandb: 🚀 View run at https://wandb.ai/ucsc-atp/lightning_logs/runs/d1kyk02f
100%|██████████████████████████████████████████████| 118517/118517 [00:00<00:00, 376349.77it/s]
2024-09-12 13:31:17.287 | INFO     | generation.datamodule:_load_data:60 - 250814 examples loaded
100%|██████████████████████████████████████████████████| 2000/2000 [00:00<00:00, 299112.43it/s]
2024-09-12 13:31:17.380 | INFO     | generation.datamodule:_load_data:60 - 4260 examples loaded
Enabling DeepSpeed BF16. Model parameters and inputs will be cast to `bfloat16`.
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
2024-09-12 13:31:17.515 | INFO     | common:get_optimizers:392 - Optimizing with FusedAdam
Using /home/agittis/.cache/torch_extensions/py311_cu121 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/agittis/.cache/torch_extensions/py311_cu121/fused_adam/build.ninja...
/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/torch/utils/cpp_extension.py:1965: UserWarning: TORCH_CUDA_ARCH_LIST is not set, all archs for visible cards are included for compilation.
If this is not desired, please set os.environ['TORCH_CUDA_ARCH_LIST'].
  warnings.warn(
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module fused_adam...
Time to load fused_adam op: 0.09805893898010254 seconds
Traceback (most recent call last):
  File "/home/agittis/RP/ReProver/generation/main.py", line 25, in <module>
    main()
  File "/home/agittis/RP/ReProver/generation/main.py", line 20, in main
    cli = CLI(RetrievalAugmentedGenerator, GeneratorDataModule)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/pytorch_lightning/cli.py", line 394, in __init__
    self._run_subcommand(self.subcommand)
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/pytorch_lightning/cli.py", line 701, in _run_subcommand
    fn(**fn_kwargs)
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 538, in fit
    call._call_and_handle_interrupt(
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/pytorch_lightning/trainer/call.py", line 46, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 105, in launch
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 574, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/pytorch_lightning/trainer/trainer.py", line 957, in _run
    self.strategy.setup(self)
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/pytorch_lightning/strategies/deepspeed.py", line 350, in setup
    self.init_deepspeed()
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/pytorch_lightning/strategies/deepspeed.py", line 451, in init_deepspeed
    self._initialize_deepspeed_train(self.model)
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/pytorch_lightning/strategies/deepspeed.py", line 487, in _initialize_deepspeed_train
    model, deepspeed_optimizer = self._setup_model_and_optimizer(model, optimizer, scheduler)
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/pytorch_lightning/strategies/deepspeed.py", line 423, in _setup_model_and_optimizer
    deepspeed_engine, deepspeed_optimizer, _, _ = deepspeed.initialize(
                                                  ^^^^^^^^^^^^^^^^^^^^^
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/deepspeed/__init__.py", line 193, in initialize
    engine = DeepSpeedEngine(args=args,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 269, in __init__
    self._configure_distributed_model(model)
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1201, in _configure_distributed_model
    self._broadcast_model()
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/deepspeed/runtime/engine.py", line 1120, in _broadcast_model
    dist.broadcast(p.data, groups._get_broadcast_src_rank(), group=self.seq_data_parallel_group)
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/deepspeed/comm/comm.py", line 117, in log_wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/deepspeed/comm/comm.py", line 224, in broadcast
    return cdb.broadcast(tensor=tensor, src=src, group=group, async_op=async_op)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 600, in _fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/deepspeed/comm/torch.py", line 200, in broadcast
    return torch.distributed.broadcast(tensor=tensor, src=src, group=group, async_op=async_op)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/torch/distributed/c10d_logger.py", line 79, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/agittis/miniconda3/envs/rp2/lib/python3.11/site-packages/torch/distributed/distributed_c10d.py", line 2209, in broadcast
    work = group.broadcast([tensor], opts)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: Tensors must be contiguous
[rank0]: (the same traceback is repeated verbatim with a "[rank0]: " prefix; elided)
[rank0]: ValueError: Tensors must be contiguous
wandb: 🚀 View run train_generator_random at: https://wandb.ai/ucsc-atp/lightning_logs/runs/d1kyk02f
wandb: Find logs at: ../../../../tmp/wandb/run-20240912_133045-d1kyk02f/logs
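
The traceback shows where this fails: DeepSpeed's `_broadcast_model` calls `torch.distributed.broadcast` on every model parameter when the engine initializes, and the backend rejects any tensor whose storage is not contiguous. This runs even in a single-GPU setup, since the DeepSpeed strategy still creates a (one-member) process group. A quick way to check which parameters are the culprits is the sketch below; it assumes the generator is built from `google/byt5-small` as in the repo's configs (substitute whatever base model your config actually uses):

# Minimal diagnostic sketch (not ReProver code): list every parameter whose
# storage is non-contiguous, since torch.distributed.broadcast refuses those.
# "google/byt5-small" is an assumption taken from the repo's configs.
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("google/byt5-small")
for name, param in model.named_parameters():
    if not param.data.is_contiguous():
        print(f"non-contiguous parameter: {name} shape={tuple(param.shape)}")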

Thanks!

yangky11 commented 3 weeks ago

Does the fix in https://github.com/lean-dojo/ReProver/issues/66 work?

AG161 commented 2 weeks ago

Yes, that fixes it. Sorry I missed that earlier issue. :sweat_smile:
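
For anyone who lands here before finding #66: the gist, as I understand it, is to force the model's parameters into contiguous storage before DeepSpeed initializes. I haven't re-verified the exact patch from that issue, so treat the following as a sketch rather than the official fix; the helper name `make_params_contiguous` is mine, not ReProver's:

import torch

def make_params_contiguous(model: torch.nn.Module) -> None:
    # Rewrite each parameter's backing storage so that the
    # torch.distributed.broadcast call in DeepSpeed's _broadcast_model
    # no longer raises "ValueError: Tensors must be contiguous".
    for param in model.parameters():
        if not param.data.is_contiguous():
            param.data = param.data.contiguous()

# e.g. call make_params_contiguous(model) right after loading the pretrained
# weights and before trainer.fit(...) hands the model to DeepSpeed.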