KindXiaoming / pykan

Kolmogorov-Arnold Networks
MIT License
15.02k stars 1.39k forks

Fixing learned weight for continuous learning #49

Open guyko81 opened 6 months ago

guyko81 commented 6 months ago

I'm building a model by iteratively adding new input variables. It would be brilliant to be able to continue training a model by inheriting from another one (simply by creating a copy of it, e.g. model2 = model.copy()) and then adding new nodes and/or edges. A second piece of functionality would be fixing (freezing) an edge's learned weight function. Something like this:

```python
model = KAN(width=[1,1], grid=6, k=3)
model.train(dataset, opt="Adam", lr=0.01, steps=1000)
```

```python
model2 = model.copy()
model2.add_node(0, 1)     # would automatically add edges between the new node and all nodes in the next layer
model2.fix_edge(0, 0, 0)  # keep the previously learned weight function
model2.train(dataset2, opt="Adam", lr=0.01, steps=200)
```
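Until something like `fix_edge()` exists, one possible workaround sketch: since a KAN is a `torch.nn.Module`, the model can be deep-copied and selected parameters frozen by turning off their gradients. The helper name `copy_and_freeze` and the parameter-name prefixes are my own illustration, not pykan's actual API or naming scheme:

```python
import copy

import torch
import torch.nn as nn


def copy_and_freeze(model: nn.Module, frozen_prefixes: tuple) -> nn.Module:
    """Deep-copy a model and freeze every parameter whose name starts with
    one of the given prefixes, so the optimizer leaves those weights alone."""
    clone = copy.deepcopy(model)
    for name, param in clone.named_parameters():
        if name.startswith(frozen_prefixes):
            param.requires_grad_(False)  # excluded from gradient updates
    return clone
```

When training the copy, the optimizer should be built only over the still-trainable parameters, e.g. `[p for p in clone.parameters() if p.requires_grad]`.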

Use case: in time series analysis, DayOfWeek is a very strong variable. It would be great to first find the function that describes it, and then force the model to learn only the additive changes needed on top of the learned DayOfWeek pattern, such as a holiday effect or a trend.
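The two-stage idea can be illustrated with a toy sketch (plain NumPy, not pykan): first learn the DayOfWeek effect, freeze it, then fit the additive component (here a linear trend) on the residual. `fit_additive` is a hypothetical helper made up for this illustration:

```python
import numpy as np


def fit_additive(day_of_week: np.ndarray, t: np.ndarray, y: np.ndarray):
    """Stage 1: learn a per-day-of-week mean effect.
    Stage 2: with that effect held fixed, fit a linear trend a*t + b
    to the residual by least squares."""
    # Stage 1: day-of-week effect as a lookup table of 7 means
    dow_effect = np.array([y[day_of_week == d].mean() for d in range(7)])
    residual = y - dow_effect[day_of_week]
    # Stage 2: the DayOfWeek component stays frozen; only the trend is fit
    a, b = np.polyfit(t, residual, 1)

    def predict(dow, tt):
        return dow_effect[dow] + a * tt + b

    return dow_effect, (a, b), predict
```

In a KAN this would correspond to training the DayOfWeek edge first, freezing its learned weight function, and letting subsequent training adjust only the newly added edges.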

2017wxyzwxyz commented 6 months ago

@guyko81 May I ask, based on your previous experience, has KAN significantly improved prediction accuracy on time series data compared to other models such as LSTM?