RetroCirce / HTS-Audio-Transformer

The official code repo of "HTS-AT: A Hierarchical Token-Semantic Audio Transformer for Sound Classification and Detection"
https://arxiv.org/abs/2202.00874
MIT License

Reproducing training on ESC-50 raises an error #41

Closed: visionchan closed this issue 1 year ago

visionchan commented 1 year ago

My CUDA version is 11.1, my PyTorch version is 1.9.0, and my PyTorch Lightning version is 1.6.0.

When I try to reproduce HTS-AT on ESC-50, I get the error below. Could you help me solve it?


Traceback (most recent call last):
  File "main.py", line 432, in <module>
    main()
  File "main.py", line 428, in main
    train()
  File "main.py", line 398, in train
    trainer.fit(model, audioset_data)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 771, in fit
    self._call_and_handle_interrupt(
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 724, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 812, in _fit_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1237, in _run
    results = self._run_stage()
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1324, in _run_stage
    return self._run_train()
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1354, in _run_train
    self.fit_loop.run()
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 269, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 208, in advance
    batch_output = self.batch_loop.run(batch, batch_idx)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
    self.advance(*args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 203, in advance
    result = self._run_optimization(
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 256, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, batch_idx, closure)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 369, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1596, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1625, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 193, in optimizer_step
    return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 155, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/torch/optim/adamw.py", line 65, in step
    loss = closure()
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 140, in _wrap_closure
    closure_result = closure()
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 148, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 143, in closure
    self._backward_fn(step_output.closure_loss)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 311, in backward_fn
    self.trainer._call_strategy_hook("backward", loss, optimizer, opt_idx)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1766, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 168, in backward
    self.precision_plugin.backward(self.lightning_module, closure_loss, *args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 80, in backward
    model.backward(closure_loss, optimizer, *args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1370, in backward
    loss.backward(*args, **kwargs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/data/chenyuanjian/anaconda3/envs/htstrans/lib/python3.8/site-packages/torch/autograd/__init__.py", line 147, in backward
    Variable._execution_engine.run_backward(
RuntimeError: upsample_bicubic2d_backward_out_cuda does not have a deterministic implementation, but you set 'torch.use_deterministic_algorithms(True)'. You can turn off determinism just for this operation if that's acceptable for your application. You can also file an issue at https://github.com/pytorch/pytorch/issues to help us prioritize adding deterministic support for this operation.
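The traceback bottoms out in the backward kernel for bicubic upsampling (upsample_bicubic2d_backward_out_cuda), which has no deterministic CUDA implementation, so `torch.use_deterministic_algorithms(True)` makes it raise. A minimal sketch of the usual workaround, picking arguments by PyTorch version (the helper name is illustrative, not from this repo; the `warn_only` flag only exists from PyTorch 1.11 on):

```python
def determinism_args(torch_version: str):
    """Return (mode, warn_only) for torch.use_deterministic_algorithms.

    warn_only is only accepted from PyTorch 1.11 onwards; older
    releases can only switch strict determinism on or off.
    """
    # Strip any local build suffix such as "+cu111" before parsing.
    major, minor = (int(x) for x in torch_version.split("+")[0].split(".")[:2])
    if (major, minor) >= (1, 11):
        # Keep determinism, but warn instead of raising for ops like
        # upsample_bicubic2d_backward_out_cuda.
        return True, True
    # e.g. PyTorch 1.9.0, as used here: strict determinism must be
    # turned off entirely for this op to run.
    return False, False


# Usage (illustrative):
#   import torch
#   mode, warn_only = determinism_args(torch.__version__)
#   if warn_only:
#       torch.use_deterministic_algorithms(mode, warn_only=True)
#   else:
#       torch.use_deterministic_algorithms(mode)
```

On PyTorch 1.9.0 the trade-off is between exact reproducibility and using this kernel at all; relaxing determinism only changes bit-level reproducibility, not correctness.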

ammerser commented 1 year ago

I have the same question. Have you solved it?

visionchan commented 1 year ago

Sure, please refer to issue #13.

ammerser commented 1 year ago

Thanks for your reply, but I ran into a new problem: "corrupted size vs. prev_size", followed by "Aborted (core dumped)". Have you encountered this problem?
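A "corrupted size vs. prev_size" abort is a glibc heap-corruption error raised from native code, not a Python exception, so the interpreter dies without a traceback. A general first diagnostic (a sketch, not a fix confirmed in this thread) is to enable the standard-library `faulthandler`, which installs handlers for fatal signals such as SIGABRT and dumps the active Python frames before the process exits, showing where in the training script the crash happened:

```python
import faulthandler

# Dump the Python stack of every thread when the process receives a
# fatal signal (SIGSEGV, SIGABRT, SIGBUS, SIGFPE, SIGILL) -- a glibc
# "corrupted size vs. prev_size" abort terminates via SIGABRT.
faulthandler.enable()

print(faulthandler.is_enabled())
```

Equivalently, the script can be launched with `python -X faulthandler main.py` without touching the code.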