Open diego-uva opened 4 months ago
Good afternoon,
I have started using the KAN library and ran into a couple of errors while running the code from the https://kindxiaoming.github.io/pykan/intro.html#hello-kan manual page.
The first is that the manual has not yet been updated: the train method is used there instead of fit.
The second occurred when executing this code from the manual page:
from kan import *

# create a KAN: 2D inputs, 1D output, and 5 hidden neurons.
# cubic spline (k=3), 5 grid intervals (grid=5).
model = KAN(width=[2,5,1], grid=5, k=3, seed=0)

# create dataset f(x,y) = exp(sin(pi*x)+y^2)
f = lambda x: torch.exp(torch.sin(torch.pi*x[:,[0]]) + x[:,[1]]**2)
dataset = create_dataset(f, n_var=2)
dataset['train_input'].shape, dataset['train_label'].shape

# plot KAN at initialization
model(dataset['train_input']);
model.plot(beta=100)

# train the model
model.fit(dataset, opt="LBFGS", steps=20, lamb=0.01, lamb_entropy=10.)  # fit instead of train
model.plot()
model.prune()  # The error in this line!
model.plot(mask=True)
The error is:
"name": "AttributeError", "message": "'float' object has no attribute 'to'", "stack": "--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[13], line 1 ----> 1 model.prune() 2 model.plot(mask=True) File c:\\Users\\hp\\.conda\\envs\\kan\\lib\\site-packages\\kan\\MultKAN.py:970, in MultKAN.prune(self, node_th, edge_th) 969 def prune(self, node_th=1e-2, edge_th=3e-2): --> 970 self = self.prune_node(node_th, log_history=False) 971 #self.prune_node(node_th, log_history=False) 972 self.forward(self.cache_data) File c:\\Users\\hp\\.conda\\envs\\kan\\lib\\site-packages\\kan\\MultKAN.py:942, in MultKAN.prune_node(self, threshold, mode, active_neurons_id, log_history) 938 model2.symbolic_fun[i].out_dim_mult = num_mult 940 width_new.append([num_sum, num_mult]) --> 942 model2.act_fun[i] = model2.act_fun[i].get_subset(active_neurons_up[i], active_neurons_down[i]) 943 model2.symbolic_fun[i] = self.symbolic_fun[i].get_subset(active_neurons_up[i], active_neurons_down[i]) 945 model2.cache_data = self.cache_data File c:\\Users\\hp\\.conda\\envs\\kan\\lib\\site-packages\\kan\\KANLayer.py:305, in KANLayer.get_subset(self, in_id, out_id) 283 def get_subset(self, in_id, out_id): 284 ''' 285 get a smaller KANLayer from a larger KANLayer (used for pruning) 286 (...) 303 (2, 3) 304 ''' --> 305 spb = KANLayer(len(in_id), len(out_id), self.num, self.k, base_fun=self.base_fun, device=self.device) 306 spb.grid.data = self.grid[in_id] 307 spb.coef.data = self.coef[in_id][:,out_id] File c:\\Users\\hp\\.conda\\envs\\kan\\lib\\site-packages\\kan\\KANLayer.py:132, in KANLayer.__init__(self, in_dim, out_dim, num, k, noise_scale, scale_base, scale_sp, base_fun, grid_eps, grid_range, sp_trainable, sb_trainable, save_plot_data, device, sparse_init) 129 else: 130 mask = 1. --> 132 scale_base = scale_base.to(device) 133 self.scale_base = torch.nn.Parameter(torch.ones(in_dim, out_dim, device=device) * scale_base * mask).requires_grad_(sb_trainable) # make scale trainable 134 #else: 135 #self.scale_base = torch.nn.Parameter(scale_base.to(device)).requires_grad_(sb_trainable) AttributeError: 'float' object has no attribute 'to'"
Best regards.
Diego.

Just comment out the erroneous line in KANLayer.py, and it will work.
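If it helps to locate the file to edit: the traceback above points at the scale_base.to(device) call inside KANLayer.__init__ of the installed package. A quick way to find where your copy of KANLayer.py lives (paths will differ per environment; this is just a convenience snippet, not part of pykan):

import os
import kan

# print the install directory of the kan package; open KANLayer.py there and
# comment out the line shown in the traceback (scale_base = scale_base.to(device))
print(os.path.dirname(kan.__file__))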
Hi, please find the most up-to-date hellokan here: https://github.com/KindXiaoming/pykan/blob/master/hellokan.ipynb
model.prune() should be model = model.prune()
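In other words, prune() returns the pruned model rather than only modifying it in place, so the end of the hellokan example becomes something like:

model = model.prune()  # reassign: prune() returns the smaller, pruned model
model.plot()           # plot the pruned network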
Thanks for the reply! Unfortunately, it does not help much; I'm facing an analogous problem with the latest hellokan:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-11-d30a6490a38c> in <cell line: 1>()
----> 1 model = model.prune()
2 model.plot()
/content/pykan/kan/MultKAN.py in prune(self, node_th, edge_th)
968
969 def prune(self, node_th=1e-2, edge_th=3e-2):
--> 970 self = self.prune_node(node_th, log_history=False)
971 #self.prune_node(node_th, log_history=False)
972 self.forward(self.cache_data)
/content/pykan/kan/MultKAN.py in prune_node(self, threshold, mode, active_neurons_id, log_history)
940 width_new.append([num_sum, num_mult])
941
--> 942 model2.act_fun[i] = model2.act_fun[i].get_subset(active_neurons_up[i], active_neurons_down[i])
943 model2.symbolic_fun[i] = self.symbolic_fun[i].get_subset(active_neurons_up[i], active_neurons_down[i])
944
/content/pykan/kan/KANLayer.py in get_subset(self, in_id, out_id)
303 (2, 3)
304 '''
--> 305 spb = KANLayer(len(in_id), len(out_id), self.num, self.k, base_fun=self.base_fun, device=self.device)
306 spb.grid.data = self.grid[in_id]
307 spb.coef.data = self.coef[in_id][:,out_id]
/content/pykan/kan/KANLayer.py in __init__(self, in_dim, out_dim, num, k, noise_scale, scale_base, scale_sp, base_fun, grid_eps, grid_range, sp_trainable, sb_trainable, save_plot_data, device, sparse_init)
130 mask = 1.
131
--> 132 scale_base = scale_base.to(device)
133 self.scale_base = torch.nn.Parameter(torch.ones(in_dim, out_dim, device=device) * scale_base * mask).requires_grad_(sb_trainable) # make scale trainable
134 #else:
AttributeError: 'float' object has no attribute 'to'
PS: 1) I also tried different versions (v0.2.1, v0.1.2); the problem repeats. 2) A small suggestion: maybe a lot of these compatibility issues would vanish if you could provide a runnable Google Colab hellokan, like the one here: https://colab.research.google.com/drive/1YOU7AifdYieMWK2hDfKjlN7l6_n6BkvV?usp=sharing
Line 132, 'scale_base = scale_base.to(device)', is the problem; please comment it out. This is already updated in the GitHub repo (so you can also pull the latest GitHub repo), but not on PyPI (thanks for your note, I just realized this).
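For anyone who cannot edit or update the package right away, here is a minimal, self-contained sketch of why the call fails and what the workaround amounts to. The isinstance guard is my own illustration of the idea, not the exact code in the repo:

import torch

scale_base = 1.0  # after pruning, scale_base can arrive as a plain Python float
# calling scale_base.to(device) on a float raises:
#   AttributeError: 'float' object has no attribute 'to'
if isinstance(scale_base, torch.Tensor):
    scale_base = scale_base.to('cpu')  # only move it when it is actually a tensor

# the broadcast multiplication used to build the parameter works for both
# floats and tensors, which is why commenting the .to(device) line out is safe:
scale = torch.ones(2, 3) * scale_base
print(scale.shape)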