KindXiaoming / pykan

Kolmogorov Arnold Networks
MIT License
14.94k stars · 1.38k forks

I fixed all random seeds. But the result is still different #236

Closed yyugogogo closed 5 months ago

yyugogogo commented 5 months ago

Hi author, congrats on the fantastic work. When I tried to reproduce your results in hellokan.ipynb, I found that the results differ across runs, even though I've fixed all the settings, including the random seed in create_dataset and in the KAN initialization. By the way, I print the result right after auto_symbolic is called (as you suggested in another issue). What could be causing the results to differ from run to run?
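For context, fixing "all random seeds" in a PyTorch script usually means seeding every RNG the code touches. The thread doesn't show the attached notebook's exact calls, so the sketch below is illustrative, not the user's actual code:

```python
import random

import numpy as np
import torch


def fix_seeds(seed: int = 0) -> None:
    """Seed every RNG a typical PyTorch script draws from."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # also seeds CUDA generators on recent versions
    # cuDNN settings only matter on GPU, but are harmless on CPU:
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


# Re-seeding should make random draws repeat exactly:
fix_seeds(0)
a = torch.rand(3)
fix_seeds(0)
b = torch.rand(3)
print(torch.equal(a, b))  # True
```

Even with all of this in place, some operations (and, as discussed below, some optimizers) can still introduce run-to-run variation.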

[screenshots attached: result1, result2]

KindXiaoming commented 5 months ago

It looks like you are running in a Jupyter notebook. What about (1) restarting the notebook and then rerunning, and (2) rerunning without restarting the notebook? I'm fairly confident that (1), at least, will give you matching results.

yyugogogo commented 5 months ago

Hi Xiaoming, thank you for your timely reply! I tried both (1) and (2), but neither works; the results are still different. I'm confused about what causes the randomness, since I've fixed all the random seeds (maybe I missed some?). I've attached my code for your reference. Thanks! KAN.zip

> It looks like you are running in a Jupyter notebook. What about (1) restarting the notebook and then rerunning, and (2) rerunning without restarting the notebook? I'm fairly confident that (1), at least, will give you matching results.

KindXiaoming commented 5 months ago

Hi, I also remember that LBFGS has some weird sources of randomness. How about switching to Adam (you'll need to train longer to make it work)? Using float64 instead of float32 may also help.
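A minimal sketch of the two suggestions in plain PyTorch; the thread doesn't show the training code, so a toy module stands in for the KAN and the learning rate is illustrative:

```python
import torch

# Suggestion 1: switch the global default dtype to double precision.
# Modules and tensors created afterwards pick this up automatically.
torch.set_default_dtype(torch.float64)

# Suggestion 2: train with Adam instead of LBFGS. Adam's updates are
# deterministic given fixed seeds, but it typically needs more steps.
model = torch.nn.Linear(2, 1)  # toy stand-in for the KAN model
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is illustrative

print(model.weight.dtype)  # torch.float64
```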

yyugogogo commented 5 months ago

Hi, thanks for your help! I've fixed the problem by

  1. Using float64, by adding torch.set_default_dtype(torch.float64) at the beginning of my code (this requires editing KANLayer.py to avoid a TypeError)
  2. Replacing LBFGS with Adam
  3. Running as a .py script instead of in a notebook

Thank you again for your help!