Closed caxelrud closed 2 months ago
Hey @caxelrud, that's an interesting question. The Lipschitz-bounded networks in this package will allow you to bound the sensitivity of the network around every point. If you're just interested in bounding the gain around a particular operating point, this might not be exactly what you want. Do you mind sharing some more details of your application please?
Using a tanh or something similar around the point would definitely smooth the network response there. Another option might be Jacobian regularisation.
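For reference, here's a rough sketch of what Jacobian regularisation around an operating point can look like. Everything below (the toy network, the weights, the penalty weighting) is illustrative and not this package's API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-hidden-layer tanh network; the weights are random stand-ins
W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

def f(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

def jacobian(x):
    # Analytic Jacobian of f at x: W2 @ diag(1 - tanh(h)^2) @ W1,
    # where h is the hidden-layer pre-activation
    h = W1 @ x + b1
    return W2 @ np.diag(1.0 - np.tanh(h) ** 2) @ W1

def jacobian_penalty(x):
    # Squared Frobenius norm of the Jacobian at x; during training you would
    # add lam * jacobian_penalty(x0) to the loss to damp sensitivity near x0
    return float(np.sum(jacobian(x) ** 2))

x0 = np.array([0.5, -0.2, 1.0])   # operating point of interest
penalty = jacobian_penalty(x0)
```

This penalises sensitivity only at the sampled point(s), so it's a soft constraint, unlike the hard Lipschitz bounds the package provides.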
The problem is related to models used for feedback controllers (we basically use the inverse of the model). In neural network modelling, due to data issues and some activation functions (any model will go flat at the borders of the training domain), the model's gain may become unrealistically small, which generates huge control actions. Also, in general, the model's gain should not change sign, otherwise you get positive feedback. In other words, we are trying to impose prior knowledge to constrain the model.
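On the sign constraint specifically: one common trick, sketched below with illustrative names and a toy scalar model (not this package's API), is to parameterise the weights through a softplus so every effective weight is strictly positive. With monotone activations the model is then monotone increasing, so its gain can never change sign:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unconstrained parameters; softplus keeps the effective weights positive,
# and a composition of positive-weight affine maps with monotone activations
# (tanh here) is monotone increasing, so the gain never changes sign.
V1, c1 = rng.normal(size=(8, 1)), rng.normal(size=8)
V2, c2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def softplus(z):
    return np.log1p(np.exp(z))

def f_monotone(x):
    h = np.tanh(softplus(V1) @ np.atleast_1d(x) + c1)
    return softplus(V2) @ h + c2

# Finite-difference gains are all positive on a grid
xs = np.linspace(-2.0, 2.0, 41)
ys = np.array([f_monotone(x)[0] for x in xs])
gains = np.diff(ys) / np.diff(xs)
```

Note this fixes the sign of the gain but does nothing about it going flat near the training-domain borders; you'd still want a lower bound or regularisation for the inverse-model use case.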
@caxelrud let me know if you'd like to continue the discussion elsewhere (eg: feel free to email me). I'm closing this issue since there's no changes required to the package.
What are the best practices for constraining the function's sensitivity (gain) around a point? I thought about transforming the function using a tanh or logit function around the desired point.
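For what it's worth, a minimal sketch of that tanh-transformation idea (`f`, `x0` and `c` below are illustrative stand-ins, not anything from the package): squashing the deviation of the model output from its value at the operating point bounds how far the response can move, and never increases the local gain, since g'(x) = f'(x) · (1 − tanh²(·)):

```python
import numpy as np

def f(x):
    return x ** 3             # toy "model" whose gain grows away from the point

x0, c = 1.0, 0.5              # operating point and deviation bound (illustrative)

def g(x):
    # Squash the deviation from f(x0) through a scaled tanh:
    # |g(x) - f(x0)| <= c everywhere, |g'(x)| <= |f'(x)| pointwise,
    # and g(x) ~ f(x) for x close to x0.
    return f(x0) + c * np.tanh((f(x) - f(x0)) / c)

xs = np.linspace(-2.0, 4.0, 401)
deviation = np.abs(g(xs) - f(x0))
```

Note this caps the output excursion (and so only the average gain over large moves), not the pointwise derivative itself; a hard derivative bound everywhere is exactly what the Lipschitz-bounded networks give you.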