jink1994 opened this issue 4 weeks ago

As in the title. I want to optimize the code a bit, but I'm not sure whether there are other reasons the authors did it this way. After all, once training is done, shouldn't you be able to visualize the network from its parameters directly?

Looking forward to your thoughts, thanks!
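To make the question concrete: a learned 1D curve really is fully determined by its stored parameters. Below is a minimal sketch using `scipy.interpolate.BSpline` as a stand-in for whatever pykan stores internally (the knot grid and coefficients here are made up, not taken from the library):

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                   # cubic, just as an example
grid = np.linspace(-1, 1, 12)           # hypothetical stored knot grid
# Clamped knot vector: repeat each boundary knot k extra times.
t = np.concatenate([[grid[0]] * k, grid, [grid[-1]] * k])
c = np.random.default_rng(0).normal(size=len(t) - k - 1)  # stand-in coefficients

spline = BSpline(t, c, k)

# The curve is fully determined by (t, c, k) -- no training inputs needed --
# but the evaluation range below is a choice we have to make ourselves.
xs = np.linspace(-1, 1, 200)
print(spline(xs)[:5])
```

The catch, as the answer below explains, is choosing the evaluation range `xs`: the parameters alone don't say where the actual inputs live.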
KindXiaoming commented:

To visualize the internal 1D functions, we need the intermediate activations, which are defined on inputs. If a 1D function is sin(x) globally but the inputs are all close to zero, the visualization needs to reveal that the function is nearly linear over the range of interest. This looks more like a feature to me, but one may be able to find a way to remove the need to pass inputs.
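To see the point numerically, here is a small self-contained check (plain numpy, not pykan; the band [-0.1, 0.1] is an arbitrary stand-in for "inputs all close to zero"):

```python
import numpy as np

# Hypothetical narrow band where the actual inputs live.
x_local = np.linspace(-0.1, 0.1, 200)
y_local = np.sin(x_local)

# Best straight-line fit over that band.
slope, intercept = np.polyfit(x_local, y_local, 1)
max_err = np.max(np.abs(y_local - (slope * x_local + intercept)))

print(f"slope = {slope:.4f}, intercept = {intercept:.1e}, max error = {max_err:.1e}")
# slope ~ 1 and max error ~ 1e-4: over this range sin(x) is a straight line
# for all practical purposes, which is exactly what the plot should show.
```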
jink1994 replied:

Thanks for the quick response, Xiaoming! @KindXiaoming I'm wondering whether the symbolic representation of a 1D function where the input is passed differs from that of one where the input is NOT passed. Can I ask your opinion?
> @KindXiaoming I'm wondering whether the symbolic representation of a 1D function where the input is passed differs from that of one where the input is NOT passed. Can I ask your opinion?
I can answer this:
No, they are not necessarily the same. As Xiaoming said above, given a different range of interest, the same 1D function can look very different in the visualization. This is partly because splines allow local updates, which means the symbolic representation fitted on a local domain can differ from the globally best one.
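As a toy illustration of that (plain numpy; the affine-match RMSE below is a crude stand-in for however symbolic candidates are actually ranked): score the candidates x and sin(x) against noisy samples of the same underlying function on a narrow and a wide domain.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(f, x, y):
    # RMSE of the best affine match y ~= a*f(x) + b -- a crude stand-in for
    # how a library might rank symbolic candidates against a learned curve.
    A = np.column_stack([f(x), np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(np.mean((A @ coef - y) ** 2))

candidates = {"x": lambda t: t, "sin(x)": np.sin}

for lo, hi in [(-0.1, 0.1), (-np.pi, np.pi)]:
    x = np.linspace(lo, hi, 500)
    y = np.sin(x) + rng.normal(0, 1e-3, x.shape)  # "learned" curve + small fit noise
    scores = {name: score(f, x, y) for name, f in candidates.items()}
    print(f"[{lo:+.2f}, {hi:+.2f}]  " +
          "  ".join(f"{n}: {s:.1e}" for n, s in scores.items()))
# On the narrow domain both candidates score ~1e-3 (indistinguishable);
# on the full domain only sin(x) stays that low. The winning symbolic form
# genuinely depends on the input range.
```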