alexfrom0815 / Online-3D-BPP-DRL

This repository contains the implementation of the paper Online 3D Bin Packing with Constrained Deep Reinforcement Learning.

RuntimeError: symeig_cuda: the algorithm failed to converge #6

Closed · nimisha-stellosys closed 2 years ago

nimisha-stellosys commented 2 years ago

Traceback (most recent call last):
  File "main.py", line 258, in <module>
    main(args)
  File "main.py", line 42, in main
    train_model()
  File "main.py", line 209, in train_model
    value_loss, action_loss, dist_entropy, prob_loss, graph_loss = agent.update(rollouts)
  File "D:\Online-3D-BPP-DRL-main\acktr\algo\acktr_pipeline.py", line 98, in update
    self.optimizer.step()
  File "C:\Users\Sty\anaconda3\envs\TF-GPU\lib\site-packages\torch\optim\optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "D:\Online-3D-BPP-DRL-main\acktr\algo\kfac.py", line 215, in step
    self.d_a[m], self.Q_a[m] = torch.symeig(
RuntimeError: symeig_cuda: the algorithm failed to converge; 1001 off-diagonal elements of an intermediate tridiagonal form did not converge to zero.

alexfrom0815 commented 2 years ago

The reason for this error is that the ACKTR implementation we borrowed from https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail is not robust enough. Three suggestions to solve this problem (a numerical workaround is also sketched after the list):

  1. Increase the value of the mask (in config.py) during training.
  2. Modify the algorithm's hyperparameters so that training avoids this convergence error.
  3. The convergence problem only appears in the ACKTR implementation; you can switch the training algorithm to A2C or another reinforcement learning algorithm.
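A common numerical workaround for this class of eigendecomposition failure, independent of the three suggestions above, is to damp the matrix slightly and fall back to the CPU solver when the CUDA path diverges. The sketch below is not part of the repository: the helper name and the damping value are illustrative assumptions, and `torch.linalg.eigh` is the documented replacement for the deprecated `torch.symeig`.

```python
import torch

def robust_symeig(mat, damping=1e-6):
    """Symmetric eigendecomposition with jitter and a CPU fallback.

    Hypothetical helper for acktr/algo/kfac.py; the damping value and
    the fallback strategy are assumptions, not the repository's code.
    """
    # Adding a small multiple of the identity often helps the tridiagonal
    # QR iteration inside symeig/eigh converge on ill-conditioned factors.
    eye = torch.eye(mat.shape[-1], device=mat.device, dtype=mat.dtype)
    damped = mat + damping * eye
    try:
        # torch.linalg.eigh supersedes the deprecated torch.symeig and
        # returns (eigenvalues, eigenvectors).
        return torch.linalg.eigh(damped)
    except RuntimeError:
        # Fall back to the slower but often more robust CPU LAPACK path.
        d, Q = torch.linalg.eigh(damped.cpu())
        return d.to(mat.device), Q.to(mat.device)

# The failing call in kfac.py could then delegate to, e.g.:
#     self.d_a[m], self.Q_a[m] = robust_symeig(cov)
# where cov stands for whatever matrix kfac.py currently passes
# to torch.symeig (the traceback truncates the actual argument).
```

If the CUDA solver keeps failing even with damping, routing the decomposition permanently through the CPU is a slower but reliable option.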
nimisha-stellosys commented 2 years ago

Thank you I will try with these parameters.

ghost commented 2 years ago

Hi @alexfrom0815, if I use A2C instead of ACKTR, will there be any performance downside? Or is it OK to use it?

alexfrom0815 commented 2 years ago

> Hi @alexfrom0815, if I use A2C instead of ACKTR, will there be any performance downside? Or is it OK to use it?

Unfortunately, A2C is indeed much worse than ACKTR here. We are currently looking for better alternatives to the ACKTR algorithm. In the meantime, we encourage you to try other hyperparameters (random seed, mask value, etc.) to avoid this ACKTR error.

Another tip: although we cannot determine the cause of this error, in our experience it tends to appear when the algorithm's performance is already close to its peak, so in practice it may not change the final result much.
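Given that observation, one pragmatic guard is to checkpoint the model when the update raises this error and stop training gracefully instead of crashing. This is a sketch only: the helper, its return convention, and the checkpoint path are hypothetical, while `agent`, `rollouts`, and the update call follow the names visible in the traceback's main.py.

```python
import torch

def safe_update(agent, rollouts, actor_critic,
                ckpt_path='ckpt_before_symeig_crash.pt'):
    """Run one ACKTR update; on a symeig convergence failure, save and stop.

    Hypothetical helper: returns (losses, should_stop).
    actor_critic is assumed to be the policy network built in main.py.
    """
    try:
        return agent.update(rollouts), False
    except RuntimeError as err:
        if 'symeig' not in str(err):
            raise  # unrelated failure: propagate as before
        # K-FAC's eigendecomposition diverged. Per the note above, this
        # tends to happen near peak performance, so keep the weights.
        torch.save(actor_critic.state_dict(), ckpt_path)
        return None, True
```

In the training loop one would then call `losses, should_stop = safe_update(agent, rollouts, actor_critic)`, break out when `should_stop` is True, and evaluate the saved checkpoint.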

In addition, we have completed the implementation of a new online BPP algorithm that is more stable than the current one and works at arbitrary resolution. Because of the double-blind review protocol, we will publish the new code after that work is accepted.

suoyike1 commented 2 months ago

> 1. Increase the value of the mask (in config.py) during training.

Where is config.py? I could not find it.