Closed e-tuanzi closed 5 months ago
Retraining after symbolic regression can indeed lead to errors sometimes, particularly because of this part of the code:
```python
mode = "auto"  # or "manual"
if mode == "manual":
    # manual mode
    model.fix_symbolic(0, 0, 0, 'sin')
    model.fix_symbolic(0, 1, 0, 'x^2')
    model.fix_symbolic(1, 0, 0, 'exp')
elif mode == "auto":
    # automatic mode
    lib = ['x', 'x^2', 'x^3', 'x^4', 'exp', 'log', 'sqrt', 'tanh', 'sin', 'abs']
    model.auto_symbolic(lib=lib)
```
In your case, the logarithm is not defined for inputs that are less than or equal to zero, so the loss can become NaN. That's why switching from LBFGS to Adam sometimes helps.
Remember to clear all your outputs and retrain from scratch after changing the optimizer.
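To illustrate the failure mode (a minimal sketch with NumPy, not pykan's internals): once `log` receives a non-positive input, the forward pass already produces `nan` or `-inf`, and from there the NaN propagates into the loss and every subsequent step. Clamping inputs to a small positive floor is one common hedge; the `eps` value below is an arbitrary illustrative choice.

```python
import numpy as np

# log is undefined for x <= 0: negative inputs give nan, zero gives -inf.
# Once such a value enters the loss, later gradient steps are poisoned too.
x = np.array([-0.5, 0.0, 1.0])
with np.errstate(divide="ignore", invalid="ignore"):
    y = np.log(x)
print(y)  # nan, -inf, 0.

# One common hedge: clamp inputs to a small positive floor before the log.
eps = 1e-8  # arbitrary floor, for illustration only
y_safe = np.log(np.clip(x, eps, None))
print(np.isfinite(y_safe).all())  # True
```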
@AntonioTepsich Thank you very much for solving my problem.
I also found out why a dozen repeated runs always give the same result: if you restart the kernel before each run, the notebook re-executes the seeding code, so the randomly initialized variables are identical every time. Re-running the cells without restarting the kernel gives different results.
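The kernel-restart effect can be sketched outside of pykan (assuming the notebook seeds its RNG on each fresh run, e.g. via a `seed` argument to the model constructor; NumPy stands in for the model initialization here):

```python
import numpy as np

# A fresh kernel re-runs the seeding code, so the "initial weights" are
# identical every run; re-running cells in the same session keeps consuming
# the RNG stream, so the draws differ.
def fresh_run(seed=0):
    rng = np.random.default_rng(seed)  # re-seeded, as after a kernel restart
    return rng.normal(size=4)

run1 = fresh_run()
run2 = fresh_run()
print(np.array_equal(run1, run2))  # True: identical "initializations"

rng = np.random.default_rng(0)       # one live session
draw1 = rng.normal(size=4)
draw2 = rng.normal(size=4)           # no reseed: a different draw
print(np.array_equal(draw1, draw2))  # False
```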
When I run hellokan.ipynb, a runtime error occurs due to NaN. I didn't change any settings, just ran it; I repeated it 10 times and still got the runtime error.
The symbolic fit finds the same three functions every time in hellokan.ipynb, and I don't know why.
After that, a NaN runtime error occurred during the last training:
```
train loss: nan | test loss: nan | reg: nan : 10%|█▊ | 5/50 [00:00<00:07, 5.86it/s]
```
When I set the optimizer for the last training to Adam, it worked fine:
```python
model.train(dataset, opt="Adam", steps=50);
```
Does anyone else have the same problem? Or can someone tell me why this happened? Thanks a lot!
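For diagnosing runs like the one above, one hedge is to watch the loss and bail out at the first non-finite value instead of letting NaN run through the remaining steps. The `train_with_guard` helper and the toy `step_fn` below are hypothetical illustrations, not part of pykan's API:

```python
import math

# Hypothetical guard: stop (or switch optimizer) as soon as the loss goes
# non-finite, rather than training through 45 more nan steps.
def train_with_guard(step_fn, steps=50):
    last = float("inf")
    for i in range(steps):
        last = step_fn(i)
        if not math.isfinite(last):
            return i, last  # bail out at the first nan/inf step
    return steps, last

# Toy step function that blows up at step 5, mimicking the trace above.
steps_done, last = train_with_guard(lambda i: float("nan") if i >= 5 else 1.0 / (i + 1))
print(steps_done)  # 5
```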