Closed EdgeLLM closed 4 years ago
While the loss value is quite high, it is mostly artificially inflated because the loss is computed as a sum of the penalty-function values over all sampled points, rather than a mean. We also multiply by a multiple of the batch size to balance the loss contributions from the inhomogeneous and homogeneous regions of the Helmholtz equation (i.e., where the source function is nonzero or zero). If the mean were computed instead of the sum, for example, the loss values would be significantly lower.
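To make the scale difference concrete, here is a minimal sketch (the residual values and sizes are hypothetical, not taken from `train_helmholtz.py`) showing how a summed penalty loss grows with the number of sampled points while a mean stays on the order of a single point's penalty:

```python
import numpy as np

# Hypothetical per-point penalties, e.g. squared PDE residuals at sampled points.
rng = np.random.default_rng(0)
residuals = rng.normal(scale=0.1, size=4096) ** 2

# Summing inflates the reported loss linearly with the number of points...
loss_sum = residuals.sum()

# ...while averaging keeps it at the scale of a single point's penalty.
loss_mean = residuals.mean()

# loss_sum is exactly 4096x loss_mean here, so a "large" summed loss
# can still correspond to small per-point residuals.
print(loss_sum, loss_mean)
```

So a loss in the hundreds or thousands can still mean each individual residual is tiny; the absolute value of the summed loss is not by itself evidence of failed training.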
Nevertheless, you should be able to verify that the loss curve converges and that the wavefield converges to something that looks quite reasonable.
This is what the real and imaginary parts of the wavefield should look like after 50k iterations, as shown in the TensorBoard summary.
After running the train_helmholtz.py script, I got 'train_losses_final.txt', but the losses are still very large even after 50000 epochs. Does this mean that the training has failed? I didn't change anything in the train_helmholtz.py script.