KindXiaoming / pykan

Kolmogorov Arnold Networks

Problems when using Adam and LBFGS optimizers #456

Closed · glotm closed this issue 2 months ago

glotm commented 2 months ago

My Data


```python
import torch
import numpy as np

# %%
t = 1000
ti = torch.arange(t, dtype=torch.float32)      # time index as float

# Two smooth periodic features with small Gaussian noise
x1 = torch.exp(torch.sin(ti)) + torch.normal(0, 0.01, ti.shape)
x2 = torch.exp(torch.cos(ti)) + torch.normal(0, 0.01, ti.shape)

# Two sparse noise features (dropout zeroes entries and rescales the rest)
x3 = torch.nn.functional.dropout(torch.normal(0, 1, ti.shape), p=0.4, training=True)
x4 = torch.nn.functional.dropout(2 * torch.normal(0, 0.1, ti.shape), p=0.3, training=True)

features = torch.stack((ti, x1, x2, x3, x4), dim=1)

# Label depends on x1, x2 and, through a nested exp, on x3, x4
labels = torch.exp(0.1 * features[:, 1] + 0.12 * features[:, 2]
                   + torch.exp(0.2 * features[:, 3] + 0.5 * features[:, 4]))
print(features, labels)

data = torch.cat((features, labels.reshape(-1, 1)), dim=1)
print(data)

np.savetxt('test_data.csv', data, delimiter=',', fmt='%.6f')
```

As can be seen, this is a regression problem: the label depends on the four features x1–x4 (the time index ti is also stacked into the feature matrix, so the input has five columns).
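The post does not show how `dataset` is built from the saved CSV. Here is a minimal sketch (my assumption, not from the original post) of splitting the file into the train/test dict that pykan's `fit()` expects, assuming an 80/20 random split:

```python
import numpy as np
import torch

data = np.loadtxt('test_data.csv', delimiter=',')    # columns: ti, x1..x4, label
X = torch.tensor(data[:, :5], dtype=torch.float32)   # five input columns
y = torch.tensor(data[:, 5:], dtype=torch.float32)   # label column, shape (N, 1)

n_train = int(0.8 * len(X))                          # assumed 80/20 split
perm = torch.randperm(len(X))
dataset = {
    'train_input': X[perm[:n_train]], 'train_label': y[perm[:n_train]],
    'test_input':  X[perm[n_train:]], 'test_label':  y[perm[n_train:]],
}
```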

Problem

My KAN network is defined as:

```python
from kan import KAN

model = KAN([dataset['train_input'].shape[1], 8, 1], grid=3, k=3, device=torch.device('cpu'), seed=42)
```

When I train the network with the Adam optimizer:

```python
model.fit(dataset, opt="Adam", steps=50, loss_fn=loss_fn, lamb=0.001, lr=0.001)
```

(screenshot: training result with Adam)

But when I use the LBFGS optimizer:

```python
model.fit(dataset, opt="LBFGS", steps=50, loss_fn=loss_fn, lamb=0.001, lr=0.001)
```

(screenshot: training result with LBFGS)

I would like to ask: what causes such a large gap between the two optimizers?
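For context on where a gap like this can come from: Adam takes one first-order gradient step per iteration, while LBFGS is a quasi-Newton method that re-evaluates the loss through a closure several times per step, using curvature information and (optionally) a line search. Below is a minimal plain-PyTorch sketch of the two loops; the stand-in linear model and MSE loss are my assumptions for illustration, and this is not pykan's internal training code:

```python
import torch

model = torch.nn.Linear(5, 1)                # stand-in model for illustration
x, y = torch.randn(100, 5), torch.randn(100, 1)
loss_fn = torch.nn.MSELoss()                 # assumed loss; the post's loss_fn is not shown

# Adam: one gradient evaluation per step, first-order update.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# LBFGS: requires a closure; each step may re-evaluate the loss several
# times (line search) and uses an approximate second-order update.
opt = torch.optim.LBFGS(model.parameters(), lr=1.0, line_search_fn='strong_wolfe')

def closure():
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    return loss

for _ in range(50):
    opt.step(closure)
```

On a small, smooth, full-batch regression problem like this one, a line-searched quasi-Newton step often reduces the loss far more per iteration than Adam at lr=0.001, which is one plausible source of the gap.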

P.S. I used an MLP as a comparison: (screenshot: MLP training result)
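The MLP used for the comparison is not shown in the post; here is a minimal sketch of a comparable baseline, with a hypothetical hidden width chosen to mirror KAN([5, 8, 1]):

```python
import torch
from torch import nn

# Hypothetical MLP baseline, sized to mirror the KAN's [5, 8, 1] shape.
mlp = nn.Sequential(
    nn.Linear(5, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
)
```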