Closed ZeyuSun closed 1 year ago
For TL1, would you elaborate on how the d = 1541 output sensors are chosen for the TL1 source data? I haven't been able to locate where the discretization is implemented in the data-generation or training code. I'm also unsure about the implementation behind this statement from the paper: "The source simulation box is a square domain Ω = [0, 1] × [0, 1], discretized with d = 1541 grid points."
Instead of d = 1541, I used a 100×100 Cartesian grid and got a relative L2 error of 0.079 for the TL1 source, which is not as good as the 0.0136 reported in Table 1. I'm hoping you can point out what I should do differently to reproduce the results. Thank you!
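For reference, here is how I computed the relative L2 error above — a minimal sketch, assuming the predictions and targets are stored as `(n_samples, d)` arrays (the function name and array names are my own, not from the repo):

```python
import numpy as np

def relative_l2_error(u_pred, u_true):
    """Mean relative L2 error over a batch of predicted fields.

    u_pred, u_true: arrays of shape (n_samples, d), where d is the
    number of output sensors.
    """
    num = np.linalg.norm(u_pred - u_true, axis=1)
    den = np.linalg.norm(u_true, axis=1)
    return np.mean(num / den)

# Toy check: scaling the true field by 1.01 gives ~0.01 relative error.
rng = np.random.default_rng(0)
u_true = rng.standard_normal((4, 1541))
u_pred = 1.01 * u_true
err = relative_l2_error(u_pred, u_true)
```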
We have updated the data generation scripts. The generated data can now be used directly for training the networks. The square domain is solved on an unstructured grid with 1541 points. In the previous script, we interpolated the solution onto a lattice grid for plotting purposes.
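For anyone following along, the lattice interpolation mentioned above can be sketched like this — a hypothetical stand-in (the point coordinates and field are fabricated for illustration), using `scipy.interpolate.griddata` to map an unstructured 1541-point solution onto a 100×100 lattice, which is only needed for plotting, not training:

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical unstructured solution: 1541 scattered points in [0, 1]^2
rng = np.random.default_rng(1)
pts = rng.random((1541, 2))  # (x, y) coordinates of the grid points
u = np.sin(np.pi * pts[:, 0]) * np.sin(np.pi * pts[:, 1])  # stand-in field

# Interpolate onto a 100x100 lattice, as the old script did for plotting.
# fill_value handles lattice points outside the convex hull of the points.
xg, yg = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
u_lattice = griddata(pts, u, (xg, yg), method="linear", fill_value=0.0)
```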
I was trying to run TL1/source_model.py but hit an error at this line: https://github.com/katiana22/TL-DeepONet/blob/45d0cdbf6374a20378e637273213f4e19bd571ee/TL1/dataset.py#L71
The error says:
This is because `u_train` has shape (2000, 100, 100), but I believe it's supposed to be (2000, 1541), where 1541 is the number of output sensors. The data I used was generated here: https://github.com/katiana22/TL-DeepONet/blob/45d0cdbf6374a20378e637273213f4e19bd571ee/data_generation/Darcy_geometry/Darcy_square.m#L57-L69, which indeed seems to interpolate the unknown functions on a 100 × 100 grid.
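To illustrate why the lattice data can't simply be reshaped to fit: flattening the (2000, 100, 100) array yields d = 10000 sensors, not the d = 1541 unstructured points the dataset loader expects, so the data has to be regenerated rather than reshaped. A minimal sketch with a placeholder array (`u_train` here is fabricated, not loaded from the repo's files):

```python
import numpy as np

# Placeholder for the lattice-interpolated data produced by Darcy_square.m
u_train = np.zeros((2000, 100, 100))

# Flattening each 100x100 field gives 10000 columns, not 1541 -- so the
# shape mismatch cannot be fixed by reshaping alone.
u_flat = u_train.reshape(u_train.shape[0], -1)
```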