Hi, thank you so much for your work! I'm wondering about the generalization ability of such a PINN. I tried it with a simple sine function: when training and testing within the range [0, 2π], both the training loss and the validation loss are good. However, when I feed the network a new set of x values in [2π, 4π], the predictions look bad. Is this because the network has never seen such numbers? It feels like the network is memorizing the distribution it was trained on rather than generalizing to unseen inputs.
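For reference, here is a minimal standalone sketch of what I mean (a plain NumPy MLP fitted to sin with only a data loss, no physics residual; all names and hyperparameters are my own choices, just to illustrate the interpolation-vs-extrapolation gap):

```python
import numpy as np

rng = np.random.default_rng(0)

# Train on x in [0, 2*pi], evaluate on the unseen interval [2*pi, 4*pi].
x_train = np.linspace(0.0, 2 * np.pi, 200).reshape(-1, 1)
y_train = np.sin(x_train)
x_test = np.linspace(2 * np.pi, 4 * np.pi, 200).reshape(-1, 1)
y_test = np.sin(x_test)

# One-hidden-layer tanh network, trained by full-batch gradient descent on MSE.
H = 32
W1 = rng.normal(0.0, 0.5, (1, H))
b1 = rng.uniform(-3.0, 3.0, H)   # spread the tanh transitions over the domain
W2 = rng.normal(0.0, 0.1, (H, 1))
b2 = np.zeros(1)

lr, n = 0.05, len(x_train)
for _ in range(40000):
    h = np.tanh(x_train @ W1 + b1)
    pred = h @ W2 + b2
    g = 2.0 * (pred - y_train) / n          # d(MSE)/d(pred)
    gh = (g @ W2.T) * (1.0 - h ** 2)        # backprop through tanh
    W2 -= lr * (h.T @ g);        b2 -= lr * g.sum(0)
    W1 -= lr * (x_train.T @ gh); b1 -= lr * gh.sum(0)

def predict(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

mse_in = float(np.mean((predict(x_train) - y_train) ** 2))   # error in-range
mse_out = float(np.mean((predict(x_test) - y_test) ** 2))    # error out-of-range
print(f"MSE on [0, 2pi]:   {mse_in:.4f}")
print(f"MSE on [2pi, 4pi]: {mse_out:.4f}")
```

Outside the training interval every tanh unit saturates, so the network tends toward a near-affine function while sin keeps oscillating, and nothing in a data-only MSE loss pushes it to continue the periodic pattern.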