Open 911569318 opened 1 month ago
Hi 911569318,
I am not entirely sure what happened. My first guess would be that the pruning destroyed too much of the network, making everything fall apart. Calling
model(dataset['train_input'])
model.plot()
plt.show()  # in case you run outside an interactive backend such as a Jupyter notebook
after pruning could give additional information. Or a bit more explicit:
print(model.mask_up)
print(model.mask_down)
for layer in model.act_fun:
    print(layer.mask, "\n")
Or use a debugger to avoid the prints.
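If one of those masks is all zeros, pruning has effectively deleted that layer and the whole model collapses. A quick generic sanity check, assuming the masks are array-like 0/1 tensors (the helper name and the sample masks here are illustrative, not pykan API):

```python
import numpy as np

def prune_survivors(masks):
    """Count the surviving (non-zero) entries per mask after pruning."""
    return [int(np.count_nonzero(np.asarray(m))) for m in masks]

# illustrative masks: layer 0 kept 2 nodes, layer 1 was fully pruned away
masks = [np.array([1, 0, 1]), np.array([0, 0, 0])]
print(prune_survivors(masks))  # → [2, 0]; a 0 means that layer collapsed
```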
The current implementation of MultKAN sets the given seed during initialization as the global seed for numpy, torch and random, thereby resetting the RNG each time a KAN is initialized. Therefore, when you call random.randint(0, 1000000) after any KAN initialization that was followed only by deterministic steps, the result will always be the same.
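To see why, here is a minimal sketch of the effect using only the stdlib random module; init_model stands in for a KAN initialization that, as described above, resets the global seed:

```python
import random

def init_model(seed):
    # stands in for MultKAN.__init__, which sets the GLOBAL seed
    random.seed(seed)

draws = []
for _ in range(3):
    init_model(0)                             # every init resets the global RNG...
    draws.append(random.randint(0, 1000000))  # ...so this draw repeats
print(draws)  # three identical values
```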
This is actually a small oversight in the KAN implementation; each KAN instance should use its own RNG instead. I will write a specific issue about that and link it here.
In the meantime you can use the following workaround:
# at the beginning of your script
import numpy as np
rng = np.random.default_rng(initial_seed)
# whenever you need a random number, as a seed itself or for any other purpose
seed = int(rng.integers(0, 1000000))  # note: Generator has integers(), not randint()
Hope that helps, Leonard
I'm using pykan version 0.2.1, and here are the issues I ran into:
I want to train multiple models at the same time with different random seeds, so that the best model can be selected. However, the code I defined produced two different types of errors at runtime. Here's my code
When I tried to run the train_multiple_models method, two different things happened.
I'm having some trouble with this; could you please tell me how to solve it? Thank you.