X-LANCE / text2sql-lgesql

[ACL 2021] This project contains the source code and pre-trained models for the ACL 2021 long paper "LGESQL: Line Graph Enhanced Text-to-SQL Model with Mixed Local and Non-Local Relations".
https://arxiv.org/abs/2106.01093
Apache License 2.0

The program crashes when I use the argument "--load_optimizer" #18

Open nixonjin opened 2 years ago

nixonjin commented 2 years ago

I wanted to continue training the model with the saved optimizer, but it crashed. The traceback is shown below:

```
Traceback (most recent call last):
  File "lgesql/text2sql.py", line 105
    optimizer.step()
  File "lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
    return wrapped(*args, **kwargs)
  File "lib/python3.6/site-packages/torch/optim/optimizer.py", line 88, in wrapper
    return func(*args, **kwargs)
  File "lgesql/utils/optimization.py", line 220, in step
    exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)
RuntimeError: The size of tensor a (768) must match the size of tensor b (2) at non-singleton dimension 0
```

Have you run into this problem? How can I fix it?

nixonjin commented 2 years ago

More information:
When I comment out the line `optimizer.load_state_dict(check_point['optim'])`, the program no longer crashes, but the training loss is much larger than the loss from the last epoch of the saved model.
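For reference, the resume pattern is roughly the one sketched below; this is a hypothetical, self-contained toy (only the `check_point['optim']` key mirrors the actual code, the stand-in model, file name, and `'model'` key are placeholders). It also shows why skipping the optimizer load raises the loss: AdamW restarts with freshly initialized moment estimates.

```python
import torch

# Toy stand-in for the real LGESQL model and its optimizer.
model = torch.nn.Linear(768, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

# Save a checkpoint containing both model and optimizer state.
check_point = {'model': model.state_dict(), 'optim': optimizer.state_dict()}
torch.save(check_point, 'checkpoint.bin')

# Resume: both states need to be restored; if the optimizer line is skipped,
# training continues with empty AdamW moments and the loss jumps.
check_point = torch.load('checkpoint.bin')
model.load_state_dict(check_point['model'])
optimizer.load_state_dict(check_point['optim'])  # the call that crashes in the repo
```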

rhythmcao commented 2 years ago

Thanks a lot for pointing out this bug.

We also found this problem when loading from checkpoints. Honestly, we never used this interface to resume training from checkpoints in our experiments, so the bug went unnoticed. The crash is caused by a mismatch between the key-value pairs in the optimizer's `self.state` and the current parameters. The root cause is that the `set()` operations over the parameters in the function `set_optimizer` produce a different parameter order on each run, so the positional re-mapping of `self.state` performed by `load_state_dict` fails (see `load_state_dict` in the PyTorch `Optimizer` documentation for details).
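To make the failure mode concrete, here is a minimal, self-contained sketch (not the repo's code): `Optimizer.load_state_dict` re-associates the saved state with the current parameters purely by position, so if the parameter order differs between the run that saved the checkpoint and the run that loads it, an `exp_avg` of shape `(768,)` can end up attached to a parameter of shape `(2,)`, producing exactly the error in the traceback above.

```python
import torch

# Two parameters whose shapes mimic a BERT hidden weight (768) and a small
# classifier weight (2).
p_big = torch.nn.Parameter(torch.zeros(768))
p_small = torch.nn.Parameter(torch.zeros(2))

# Run 1: build the optimizer with one parameter order and save its state.
opt = torch.optim.AdamW([p_big, p_small], lr=1e-3)
p_big.grad = torch.zeros_like(p_big)
p_small.grad = torch.zeros_like(p_small)
opt.step()                      # creates exp_avg / exp_avg_sq for each parameter
saved_state = opt.state_dict()

# Run 2: rebuild the optimizer with the parameters in a different order
# (what iterating over an unordered set() can silently produce across runs).
opt_resumed = torch.optim.AdamW([p_small, p_big], lr=1e-3)
opt_resumed.load_state_dict(saved_state)  # state is re-mapped by position only

p_big.grad = torch.zeros_like(p_big)
p_small.grad = torch.zeros_like(p_small)
opt_resumed.step()  # RuntimeError: size of tensor a (768) must match tensor b (2)
```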

We have fixed this bug by removing all `set()` operations in the function `set_optimizer` in `utils/optimization.py`. Training from scratch and then resuming from a checkpoint both appear to work correctly now.
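For anyone still on an older checkout, the idea behind the fix can be sketched as follows. This is only an illustration assuming a standard AdamW weight-decay split; the real `set_optimizer` in `utils/optimization.py` differs in its details. The point is that the parameter groups are built from plain lists, so their order is identical across runs and `load_state_dict` maps the saved state back onto the right parameters.

```python
import torch

def set_optimizer_sketch(model, lr=5e-4, weight_decay=0.1):
    # Collect parameters with plain lists: iteration over named_parameters()
    # is deterministic, so the resulting param_groups (and hence the positional
    # mapping done by load_state_dict) are stable across runs.
    no_decay = ('bias', 'LayerNorm.weight')
    decay_params, no_decay_params = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        if any(nd in name for nd in no_decay):
            no_decay_params.append(param)
        else:
            decay_params.append(param)
    param_groups = [
        {'params': decay_params, 'weight_decay': weight_decay},
        {'params': no_decay_params, 'weight_decay': 0.0},
    ]
    return torch.optim.AdamW(param_groups, lr=lr)
```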

Thanks again for pointing out this problem.