My Data

```python
import torch
from torch import nn
import csv
import numpy as np

#%%
t = 1000
ti = torch.arange(t, dtype=torch.float32)  # time index as float so downstream ops stay in float32

# two smooth periodic features with small Gaussian noise
x1 = torch.exp(torch.sin(ti))
x1 += torch.normal(0, 0.01, ti.shape)
x2 = torch.exp(torch.cos(ti))
x2 += torch.normal(0, 0.01, ti.shape)

# two noise features; dropout randomly zeroes 40% / 30% of the entries (and rescales the rest)
x3 = torch.dropout(torch.normal(0, 1, ti.shape), 0.4, True)
x4 = torch.dropout(2 * torch.normal(0, 0.1, ti.shape), 0.3, True)
# print(x3)

features = torch.stack((ti, x1, x2, x3, x4), 1)
labels = torch.exp(0.1 * features[:, 1] + 0.12 * features[:, 2]
                   + torch.exp(0.2 * features[:, 3] + 0.5 * features[:, 4]))
print(features, labels)

data = torch.concat((features, labels.reshape(-1, 1)), 1)
print(data)
np.savetxt('test_data.csv', data, delimiter=',', fmt='%.6f')
```
It can be seen that this is a regression problem with only four inputs.
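For reference, the dataset dict passed to fit below can be assembled from test_data.csv roughly as follows. This is only a minimal sketch: the 80/20 split and the choice of columns 1-4 as inputs are assumptions, since the exact preprocessing is not shown here.

```python
# Sketch only: build the pykan-style dataset dict from test_data.csv.
# Assumptions: columns 1-4 (x1..x4) are the inputs, the last column is the label,
# and a simple 80/20 train/test split is used.
import numpy as np
import torch

raw = np.loadtxt('test_data.csv', delimiter=',')
X = torch.tensor(raw[:, 1:5], dtype=torch.float32)   # x1, x2, x3, x4
y = torch.tensor(raw[:, 5:6], dtype=torch.float32)   # label column

n_train = int(0.8 * len(X))
dataset = {
    'train_input': X[:n_train], 'train_label': y[:n_train],
    'test_input':  X[n_train:], 'test_label':  y[n_train:],
}
```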
Problem

My KAN network is defined as:
```python
model = KAN([dataset['train_input'].shape[1], 8, 1], grid=3, k=3, device=torch.device('cpu'), seed=42)
```
When I train the network with the Adam optimizer, the code is:
```python
model.fit(dataset, opt="Adam", steps=50, loss_fn=loss_fn, lamb=0.001, lr=0.001)
```
But when I use the LBFGS optimizer:
```python
model.fit(dataset, opt="LBFGS", steps=50, loss_fn=loss_fn, lamb=0.001, lr=0.001)
```
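For context, torch.optim.LBFGS behaves quite differently from Adam: it is a full-batch quasi-Newton method that re-evaluates the loss through a closure (possibly many times per step) and, with a strong-Wolfe line search, chooses its own step length, so the same lr setting is not directly comparable between the two. Below is a plain-PyTorch sketch of that usage pattern on a toy quadratic, not of pykan's internals:

```python
# Plain-PyTorch illustration of the LBFGS usage pattern (not pykan internals).
# One optimizer "step" can involve up to max_iter inner iterations, each calling
# the closure to recompute the loss and gradients.
import torch

w = torch.randn(5, requires_grad=True)
target = torch.ones(5)

opt = torch.optim.LBFGS([w], lr=1.0, max_iter=20, line_search_fn='strong_wolfe')

def closure():
    opt.zero_grad()
    loss = ((w - target) ** 2).sum()
    loss.backward()
    return loss

for step in range(5):
    loss = opt.step(closure)   # returns the loss from the first closure evaluation
    print(step, loss.item())
```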
I would like to ask: what causes such a big gap between the two optimizers?
P.S. I used an MLP as a comparison.
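A rough sketch of what such an MLP baseline might look like; the exact architecture and training settings used for the comparison are not shown in this issue, so the layer sizes, loss, and hyperparameters below are illustrative assumptions only:

```python
# Illustrative MLP baseline (assumed architecture and hyperparameters; the
# actual comparison network is not reproduced in this issue).
import torch
from torch import nn

mlp = nn.Sequential(
    nn.Linear(4, 8),   # 4 inputs, matching the KAN input width above
    nn.ReLU(),
    nn.Linear(8, 1),
)
optimizer = torch.optim.Adam(mlp.parameters(), lr=0.001)
mse = nn.MSELoss()

for step in range(50):
    optimizer.zero_grad()
    pred = mlp(dataset['train_input'])        # dataset dict as built above
    loss = mse(pred, dataset['train_label'])
    loss.backward()
    optimizer.step()
```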