[Open] Mixpap opened this issue 2 months ago
Hi, that's indeed an interesting use case. One option is something like:
```python
model.fix_symbolic(0, 0, 0, fun_name='x^2', fit_params_bool=False)
model.get_parameter('symbolic_fun.0.affine').data[0] = torch.tensor([0.0, 0.0, -9.87, 0.0])  # constant activation function
model.get_parameter('symbolic_fun.0.affine').requires_grad = False
```
If it still returns NaN, I'd appreciate more information (ideally, share the full code).
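For reference, a fuller (untested) sketch of this idea; the dataset setup, the [1, 1, 1] width, and the fit call are just illustrative assumptions on my side (older pykan releases use train instead of fit):

```python
import torch
from kan import KAN, create_dataset  # assuming the usual pykan entry points

# toy data: y(t) = y0 - 9.87 * t^2 on the default [-1, 1] range
f = lambda x: 1.0 - 9.87 * x[:, [0]] ** 2
dataset = create_dataset(f, n_var=1)

# a small [1, 1, 1] model: one input, one hidden node, one output
model = KAN(width=[1, 1, 1], grid=5, k=3)

# pin the (layer 0, input 0, output 0) activation to a fixed symbolic form
# and keep its affine parameters out of the fit
model.fix_symbolic(0, 0, 0, fun_name='x^2', fit_params_bool=False)

affine = model.get_parameter('symbolic_fun.0.affine')
affine.data[0] = torch.tensor([0.0, 0.0, -9.87, 0.0])  # hard-code the known constant
affine.requires_grad = False

# newer pykan releases expose .fit(); older ones call this .train()
model.fit(dataset, opt="LBFGS", steps=20)
```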
Hello, and thank you for this amazing work!
I believe that a valuable feature, particularly for interpretability and inverse problem cases, would be the ability to add “constants” as extra parameters to the model. This would allow us to incorporate known constants directly into the symbolic expression, rather than relying on the model to discover them through optimization.
For example, consider a dataset describing the position of a falling object over time, (t, y). We want our KAN model to find the equation y(t) = y(0) - g t^2. Using the current methodology described in the examples, the model would likely discover the exact symbolic expression. However, what if we already knew that g = 9.87? Instead of relying on the parameters of the layer functions to approximate this value, it would be helpful to fix or freeze an input variable containing this constant.
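For concreteness, the toy data for this example could be built like this (purely illustrative; the range and sample count are arbitrary, and the dict layout is my understanding of what pykan's fit/train expects):

```python
import torch

# synthetic falling-object data: y(t) = y0 - g * t^2, with g = 9.87 known in advance
g, y0 = 9.87, 1.0
t = torch.linspace(0.0, 1.0, 1000).unsqueeze(1)   # shape (1000, 1)
y = y0 - g * t ** 2

# train/test split in the dict format pykan seems to expect
dataset = {
    'train_input': t[::2],  'train_label': y[::2],
    'test_input':  t[1::2], 'test_label':  y[1::2],
}
```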
I attempted a workaround by fixing a symbolic activation function to a constant, using the symbolic function '0' and forcing the affine parameters of the node by hand. This forces all parameters in layer 0 to freeze, transferring the fitting process to subsequent layers. However, the optimizer returned NaN values, and I couldn't get this method to work.
Is there an alternative, perhaps simpler, way to incorporate known constants into the model? Or am I missing something in the implementation? Thanks!