IntelLabs / MART

Modular Adversarial Robustness Toolkit
BSD 3-Clause "New" or "Revised" License

Adversarial training using multiple GPUs #125

Closed chakri-rvk closed 1 year ago

chakri-rvk commented 1 year ago

I tried the following command to use 2 GPUs:

python -m mart experiment=COCO_TorchvisionFasterRCNN_Adv task_name=COCO_TorchvisionFasterRCNNv2_AdvTrain_Trial1 trainer=gpu fit=true datamodule.ims_per_batch=4 trainer=ddp trainer.devices=2

I also tried:

CUDA_VISIBLE_DEVICES=4,5 python -m mart experiment=COCO_TorchvisionFasterRCNN_Adv task_name=COCO_TorchvisionFasterRCNNv2_AdvTrain_Trial1 trainer=gpu fit=true datamodule.ims_per_batch=4 trainer=ddp trainer.devices=2

Here is a snippet of the COCO_TorchvisionFasterRCNN_Adv.yaml file (see attached screenshot).

For both of these commands, the progress bar advances from 1 to 5 twice (updating roughly every 1.5 seconds). After that, I see no further progress. Attached is a screenshot of the messages that appear before the progress bar "freezes".

Also, once the progress bar stops updating, I am not able to terminate/interrupt the run with the usual Ctrl+C; I had to kill the process by pid.

Could you please help me fix this?

[screenshot: console output before the progress bar freezes]

dxoigmn commented 1 year ago

What happens if you set datamodule.ims_per_batch=2? ims_per_batch is the global batch size irrespective of the number of devices. This value is divided by trainer.devices to get the number of images on each device.
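To make the arithmetic concrete, here is a rough standalone illustration (not MART code):

# ims_per_batch is the global batch size; DDP splits it across processes,
# so each device sees ims_per_batch / trainer.devices images per step.
ims_per_batch = 4   # value passed on the command line
devices = 2         # trainer.devices

per_device = ims_per_batch // devices
print(per_device)   # 2 images per GPU per step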

dxoigmn commented 1 year ago

Also, can you put a code diff somewhere or, ideally, push your branch somewhere?

chakri-rvk commented 1 year ago

Here is the git diff output (see attached screenshot).

I just changed the COCO_TorchvisionFasterRCNN_Adv.yaml file, switching the model from fasterrcnn_resnet50_fpn to fasterrcnn_resnet50_fpn_v2.

I ran the command with datamodule.ims_per_batch=2 as you suggested, and I encounter the same problem: the progress bar just freezes.

However, if I change the model back to fasterrcnn_resnet50_fpn, the iterations do progress. After 50 batches, though, adversarial training stops working: the progress bar keeps moving, but the gain goes to nan (in this case the progress bar does not freeze, and I can terminate the run with Ctrl+C). With the same learning rate and a single GPU, I do not encounter such issues, even after 300 batches.

dxoigmn commented 1 year ago

Can you even run this on 1 GPU without DDP? Can you run it on 1 GPU with DDP?

dxoigmn commented 1 year ago

I was able to reproduce this issue locally.

@mzweilin: I'm not sure what is going on. I don't see this issue on #103. Also running CIFAR10_CNN_Adv with DDP on 2 GPUs fails:

$ CUDA_VISIBLE_DEVICES=2,3 python -m mart experiment=CIFAR10_CNN_Adv trainer=ddp fit=true trainer.devices=2 datamodule.world_size=2
...
Traceback (most recent call last):
  File "/home/ccorneli/Projects/MART/mart/__main__.py", line 56, in main
    metric_dict, _ = lightning(cfg)
  File "/home/ccorneli/Projects/MART/mart/utils/utils.py", line 58, in wrap
    raise ex
  File "/home/ccorneli/Projects/MART/mart/utils/utils.py", line 55, in wrap
    metric_dict, object_dict = task_func(cfg=cfg)
  File "/home/ccorneli/Projects/MART/mart/tasks/lightning.py", line 75, in lightning
    trainer.fit(model=model, datamodule=datamodule, ckpt_path=ckpt_path)
  File "/home/ccorneli/Projects/MART/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in fit
    self._call_and_handle_interrupt(
  File "/home/ccorneli/Projects/MART/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 723, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/ccorneli/Projects/MART/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/home/ccorneli/Projects/MART/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1217, in _run
    self.strategy.setup(self)
  File "/home/ccorneli/Projects/MART/venv/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 176, in setup
    self.configure_ddp()
  File "/home/ccorneli/Projects/MART/venv/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 299, in configure_ddp
    self.model = self._setup_model(LightningDistributedModule(self.model))
  File "/home/ccorneli/Projects/MART/venv/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 192, in _setup_model
    return DistributedDataParallel(module=model, device_ids=device_ids, **self._ddp_kwargs)
  File "/home/ccorneli/Projects/MART/venv/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 657, in __init__
    _sync_module_states(
  File "/home/ccorneli/Projects/MART/venv/lib/python3.9/site-packages/torch/distributed/utils.py", line 134, in _sync_module_states
    module_states.append(buffer.detach())
  File "/home/ccorneli/Projects/MART/venv/lib/python3.9/site-packages/torch/nn/parameter.py", line 144, in __torch_function__
    raise ValueError(
ValueError: Attempted to use an uninitialized parameter in <method 'detach' of 'torch._C._TensorBase' objects>. This error happens when you are using a `LazyModule` or explicitly manipulating `torch.nn.parameter.UninitializedBuffer` objects. When using LazyModules Call `forward` with a dummy batch to initialize the parameters before calling torch functions
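
For context, the failure reproduces standalone: DDP's setup broadcasts every registered buffer across ranks, and detaching a still-uninitialized buffer raises exactly this ValueError. A minimal sketch of that pattern (the class and attribute names are illustrative, not MART's actual Perturber):

import torch
import torch.nn as nn

class LazyBufferModule(nn.Module):
    """Registers its buffer lazily, roughly the pattern the traceback points at."""

    def __init__(self):
        super().__init__()
        # The buffer exists, but has no shape or data until materialize() is called.
        self.register_buffer("perturbation", nn.parameter.UninitializedBuffer())

    def materialize(self, shape):
        self.perturbation.materialize(shape)

module = LazyBufferModule()

# DDP's _sync_module_states() detaches every buffer in order to broadcast it; doing
# that before the buffer is materialized raises the same ValueError as above.
try:
    module.perturbation.detach()
except ValueError as err:
    print(err)

# Materializing the buffer first (e.g. via a dummy forward pass) avoids the error.
module.materialize((3, 32, 32))
print(module.perturbation.detach().shape)
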
chakri-rvk commented 1 year ago

Can you even run this on 1 GPU without DDP? Can you run it on 1 GPU with DDP?

Yes, I can run both combinations you suggested (I continued the runs for over 100 batches and then terminated). Here are the commands:

CUDA_VISIBLE_DEVICES=1 python -m mart experiment=COCO_TorchvisionFasterRCNN_Adv task_name=COCO_TorchvisionFasterRCNNv2_AdvTrain_Trial1 trainer=gpu fit=true

CUDA_VISIBLE_DEVICES=1 python -m mart experiment=COCO_TorchvisionFasterRCNN_Adv task_name=COCO_TorchvisionFasterRCNNv2_AdvTrain_Trial1 trainer=gpu fit=true trainer=ddp trainer.devices=1

I also tried adding datamodule.ims_per_batch=4, and the following command works:

CUDA_VISIBLE_DEVICES=1 python -m mart experiment=COCO_TorchvisionFasterRCNN_Adv task_name=COCO_TorchvisionFasterRCNNv2_AdvTrain_Trial1 trainer=gpu fit=true datamodule.ims_per_batch=4 trainer=ddp trainer.devices=1

For all these runs, the code base is the same as stated in my previous post. Attached is the diff output for reference (I re-ran the command).

The moment I increase the number of GPUs, it falls apart. @dxoigmn, thank you for testing and reproducing the error on CIFAR10.

dxoigmn commented 1 year ago

I think if you delete these lines: https://github.com/IntelLabs/MART/blob/eced15bdad18b6683190997590e2500a332b03e7/mart/attack/perturber/perturber.py#L51 https://github.com/IntelLabs/MART/blob/eced15bdad18b6683190997590e2500a332b03e7/mart/attack/perturber/perturber.py#L70

then everything should work correctly again.

chakri-rvk commented 1 year ago

@dxoigmn I commented out the two lines you pointed to, but I ran into AttributeError: 'Perturber' object has no attribute 'perturbation'. I ran the following command:

CUDA_VISIBLE_DEVICES=1,2 python -m mart experiment=COCO_TorchvisionFasterRCNN_Adv task_name=COCO_TorchvisionFasterRCNNv2_AdvTrain_Trial1 trainer=gpu fit=true trainer=ddp trainer.devices=2

Find the log below

LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [1,2]
LOCAL_RANK: 1 - CUDA_VISIBLE_DEVICES: [1,2]

  | Name               | Type             | Params
--------------------------------------------------------
0 | model              | SequentialDict   | 43.7 M
1 | training_metrics   | MAP              | 0
2 | validation_metrics | MAP              | 0
3 | test_metrics       | MetricCollection | 0
--------------------------------------------------------
43.5 M    Trainable params
225 K     Non-trainable params
43.7 M    Total params
174.849   Total estimated model params size (MB)
Sanity Checking: 0it [00:00, ?it/s]/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:240: PossibleUserWarning: The dataloader, val_dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 256 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/data_connector.py:240: PossibleUserWarning: The dataloader, train_dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 256 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
  rank_zero_warn(
Epoch 0:   0%|                                                                                                                                                     | 0/15411 [00:00<?, ?it/s][2023-03-30 20:04:56,116][mart.utils.utils][ERROR] -
Traceback (most recent call last):
  File "/raid/vravilla3/MART_v2/MART/mart/utils/utils.py", line 55, in wrap
    metric_dict, object_dict = task_func(cfg=cfg)
  File "/raid/vravilla3/MART_v2/MART/mart/tasks/lightning.py", line 75, in lightning
    trainer.fit(model=model, datamodule=datamodule, ckpt_path=ckpt_path)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in fit
    self._call_and_handle_interrupt(
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 721, in _call_and_handle_interrupt
    return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 93, in launch
    return function(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1236, in _run
    results = self._run_stage()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1323, in _run_stage
    return self._run_train()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1353, in _run_train
    self.fit_loop.run()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 266, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 208, in advance
    batch_output = self.batch_loop.run(batch, batch_idx)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 203, in advance
    result = self._run_optimization(
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 256, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, batch_idx, closure)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 369, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1595, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 1646, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 286, in optimizer_step
    optimizer_output = super().optimizer_step(optimizer, opt_idx, closure, model, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 193, in optimizer_step
    return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 155, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/optim/optimizer.py", line 113, in wrapper
    return func(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/optim/sgd.py", line 125, in step
    loss = closure()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 140, in _wrap_closure
    closure_result = closure()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 148, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 134, in closure
    step_output = self._step_fn()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 427, in _training_step
    training_step_output = self.trainer._call_strategy_hook("training_step", *step_kwargs.values())
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1765, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 349, in training_step
    return self.model(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1008, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 969, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/overrides/base.py", line 82, in forward
    output = self.module.training_step(*inputs, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/models/modular.py", line 103, in training_step
    output = self(input=input, target=target, model=self.model, step="training")
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/models/modular.py", line 95, in forward
    return self.model(**kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/nn/nn.py", line 115, in forward
    output = module(step=step, sequence=sequence, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/attack/adversary.py", line 333, in forward
    self.attacker.fit(input=input, target=target, model=model, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    return func(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    return func(*args, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/attack/adversary.py", line 191, in fit
    self.on_run_start(adversary=self, input=input, target=target, model=model, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/attack/adversary.py", line 151, in on_run_start
    super().on_run_start(
  File "/raid/vravilla3/MART_v2/MART/mart/attack/adversary.py", line 34, in on_run_start
    callback.on_run_start(**kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/attack/perturber/batch.py", line 66, in on_run_start
    perturber.on_run_start(
  File "/raid/vravilla3/MART_v2/MART/mart/attack/perturber/perturber.py", line 74, in on_run_start
    self.perturbation.register_hook(self.gradient_modifier)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Perturber' object has no attribute 'perturbation'
[2023-03-30 20:04:56,118][mart.utils.utils][INFO] - Closing loggers...
Error executing job with overrides: ['experiment=COCO_TorchvisionFasterRCNN_Adv', 'task_name=COCO_TorchvisionFasterRCNNv2_AdvTrain_Trial1', 'trainer=gpu', 'fit=true', 'trainer=ddp', 'trainer.devices=2', 'datamodule.ims_per_batch=4']
Error executing job with overrides: ['experiment=COCO_TorchvisionFasterRCNN_Adv', 'task_name=COCO_TorchvisionFasterRCNNv2_AdvTrain_Trial1', 'trainer=gpu', 'fit=true', 'trainer=ddp', 'trainer.devices=2', 'datamodule.ims_per_batch=4']
Traceback (most recent call last):
  File "/raid/vravilla3/MART_v2/MART/mart/__main__.py", line 56, in main
    metric_dict, _ = lightning(cfg)
  File "/raid/vravilla3/MART_v2/MART/mart/utils/utils.py", line 58, in wrap
    raise ex
  File "/raid/vravilla3/MART_v2/MART/mart/utils/utils.py", line 55, in wrap
    metric_dict, object_dict = task_func(cfg=cfg)
  File "/raid/vravilla3/MART_v2/MART/mart/tasks/lightning.py", line 75, in lightning
    trainer.fit(model=model, datamodule=datamodule, ckpt_path=ckpt_path)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in fit
    self._call_and_handle_interrupt(
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 723, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1236, in _run
    results = self._run_stage()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1323, in _run_stage
    return self._run_train()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1353, in _run_train
    self.fit_loop.run()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 266, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 208, in advance
    batch_output = self.batch_loop.run(batch, batch_idx)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 203, in advance
    result = self._run_optimization(
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 256, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, batch_idx, closure)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 369, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1595, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 1646, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 286, in optimizer_step
    optimizer_output = super().optimizer_step(optimizer, opt_idx, closure, model, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 193, in optimizer_step
    return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 155, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/optim/optimizer.py", line 113, in wrapper
    return func(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/optim/sgd.py", line 125, in step
    loss = closure()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 140, in _wrap_closure
    closure_result = closure()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 148, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 134, in closure
    step_output = self._step_fn()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 427, in _training_step
    training_step_output = self.trainer._call_strategy_hook("training_step", *step_kwargs.values())
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1765, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 349, in training_step
    return self.model(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1008, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 969, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/overrides/base.py", line 82, in forward
    output = self.module.training_step(*inputs, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/models/modular.py", line 103, in training_step
    output = self(input=input, target=target, model=self.model, step="training")
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/models/modular.py", line 95, in forward
    return self.model(**kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/nn/nn.py", line 115, in forward
    output = module(step=step, sequence=sequence, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/attack/adversary.py", line 333, in forward
    self.attacker.fit(input=input, target=target, model=model, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    return func(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    return func(*args, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/attack/adversary.py", line 191, in fit
    self.on_run_start(adversary=self, input=input, target=target, model=model, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/attack/adversary.py", line 151, in on_run_start
    super().on_run_start(
  File "/raid/vravilla3/MART_v2/MART/mart/attack/adversary.py", line 34, in on_run_start
    callback.on_run_start(**kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/attack/perturber/batch.py", line 66, in on_run_start
    perturber.on_run_start(
  File "/raid/vravilla3/MART_v2/MART/mart/attack/perturber/perturber.py", line 74, in on_run_start
    self.perturbation.register_hook(self.gradient_modifier)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Perturber' object has no attribute 'perturbation'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
Traceback (most recent call last):
  File "/raid/vravilla3/MART_v2/MART/mart/__main__.py", line 56, in main
    metric_dict, _ = lightning(cfg)
  File "/raid/vravilla3/MART_v2/MART/mart/utils/utils.py", line 58, in wrap
    raise ex
  File "/raid/vravilla3/MART_v2/MART/mart/utils/utils.py", line 55, in wrap
    metric_dict, object_dict = task_func(cfg=cfg)
  File "/raid/vravilla3/MART_v2/MART/mart/tasks/lightning.py", line 75, in lightning
    trainer.fit(model=model, datamodule=datamodule, ckpt_path=ckpt_path)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in fit
    self._call_and_handle_interrupt(
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 721, in _call_and_handle_interrupt
    return self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 93, in launch
    return function(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1236, in _run
    results = self._run_stage()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1323, in _run_stage
    return self._run_train()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1353, in _run_train
    self.fit_loop.run()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/fit_loop.py", line 266, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 208, in advance
    batch_output = self.batch_loop.run(batch, batch_idx)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 203, in advance
    result = self._run_optimization(
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 256, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, batch_idx, closure)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 369, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1595, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 1646, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 286, in optimizer_step
    optimizer_output = super().optimizer_step(optimizer, opt_idx, closure, model, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 193, in optimizer_step
    return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 155, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/optim/optimizer.py", line 113, in wrapper
    return func(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/optim/sgd.py", line 125, in step
    loss = closure()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 140, in _wrap_closure
    closure_result = closure()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 148, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 134, in closure
    step_output = self._step_fn()
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 427, in _training_step
    training_step_output = self.trainer._call_strategy_hook("training_step", *step_kwargs.values())
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1765, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/strategies/ddp.py", line 349, in training_step
    return self.model(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 1008, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/parallel/distributed.py", line 969, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/pytorch_lightning/overrides/base.py", line 82, in forward
    output = self.module.training_step(*inputs, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/models/modular.py", line 103, in training_step
    output = self(input=input, target=target, model=self.model, step="training")
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/models/modular.py", line 95, in forward
    return self.model(**kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/nn/nn.py", line 115, in forward
    output = module(step=step, sequence=sequence, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/attack/adversary.py", line 333, in forward
    self.attacker.fit(input=input, target=target, model=model, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    return func(*args, **kwargs)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/amp/autocast_mode.py", line 12, in decorate_autocast
    return func(*args, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/attack/adversary.py", line 191, in fit
    self.on_run_start(adversary=self, input=input, target=target, model=model, **kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/attack/adversary.py", line 151, in on_run_start
    super().on_run_start(
  File "/raid/vravilla3/MART_v2/MART/mart/attack/adversary.py", line 34, in on_run_start
    callback.on_run_start(**kwargs)
  File "/raid/vravilla3/MART_v2/MART/mart/attack/perturber/batch.py", line 66, in on_run_start
    perturber.on_run_start(
  File "/raid/vravilla3/MART_v2/MART/mart/attack/perturber/perturber.py", line 74, in on_run_start
    self.perturbation.register_hook(self.gradient_modifier)
  File "/nethome/vravilla3/miniconda3/envs/gardv3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1207, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Perturber' object has no attribute 'perturbation'

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
Epoch 0:   0%|                                                                                                                                                     | 0/15411 [00:06<?, ?it/s]
dxoigmn commented 1 year ago

Add self.perturbation = None to the init func

chakri-rvk commented 1 year ago

@dxoigmn I added self.perturbation = None. It gave AttributeError: 'NoneType' object has no attribute 'register_hook'. Here is the git diff (see attached screenshot).

Then I tried adding self.gradient_modifier = None to the init func. I then got this error: AttributeError: 'NoneType' object has no attribute 'fill'. Here is a snippet of the error (see attached screenshot).

For both of these runs, I used this command:

CUDA_VISIBLE_DEVICES=1,2 python -m mart experiment=COCO_TorchvisionFasterRCNN_Adv task_name=COCO_TorchvisionFasterRCNNv2_AdvTrain_Trial1 trainer=gpu fit=true trainer=ddp trainer.devices=2 datamodule.ims_per_batch=4

chakri-rvk commented 1 year ago

@dxoigmn I tried the following changes: set self.perturbation = None in the init function, commented out the two lines you mentioned, and then added self.perturbation back in a later line. With this, I could get the code to run on 1 GPU, but with multiple GPUs I still face the same problem: the progress bar freezes after 1 batch. Based on this experiment, I think the two lines you suggested commenting out might not be the cause of this problem (since even after removing them I face the same issue). What do you think?

Attached is a screenshot of the changes I made to the perturber.py file.
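
For readers following along, here is a self-contained sketch of the pattern described above (hypothetical names and a much-simplified interface, not the exact diff in the screenshot or the real mart/attack/perturber/perturber.py):

import torch

class PerturberSketch(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Declared eagerly so the attribute always exists (avoids the AttributeError),
        # but left as None until the input shape is known.
        self.perturbation = None

    def on_run_start(self, input, gradient_modifier=None):
        # Create the perturbation from the concrete input as a plain tensor attribute,
        # rather than a registered lazy buffer, so DDP's buffer broadcast never sees an
        # unmaterialized tensor; assign it before registering the gradient hook.
        self.perturbation = torch.zeros_like(input, requires_grad=True)
        if gradient_modifier is not None:
            self.perturbation.register_hook(gradient_modifier)

perturber = PerturberSketch()
perturber.on_run_start(torch.rand(3, 32, 32), gradient_modifier=lambda grad: grad.sign())
print(perturber.perturbation.shape)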

chakri-rvk commented 1 year ago

Update: With the changes to the perturber.py file (shown in the attached screenshot), I could get the runs going for specific batch sizes. @dxoigmn, does setting self.perturbation = perturbation make sense in terms of adversarial training?

Here is the command used:

CUDA_VISIBLE_DEVICES=2,3 python -m mart experiment=COCO_TorchvisionFasterRCNN_Adv task_name=COCO_TorchvisionFasterRCNNv2_AdvTrain_Trial1 fit=true trainer=ddp trainer.devices=2 datamodule.ims_per_batch=6

However, if I reduce datamodule.ims_per_batch to 4, I encounter the same progress-bar freeze when using 2 GPUs (I do not encounter this problem with trainer.devices=1).

Could you please help me fix this?