PhilipHildebrand opened 1 year ago
I've added a figure that shows the time improvement for the partitioned heat equation tutorial (for ny = 9, 18, 36, 72 and 144). For ny = 72 and ny = 144, the version with the dirty fix in it still wasn't finished after several hours, so I've omitted those cases. timeEval-1.pdf
You can add the figure by directly dropping it in the comment while you are writing it. That would make the figure visible inside the comment. I am surprised that for ny = 72 or 144 we already have saturation. Around a hundred nodes along the coupling interface is not a lot. Can you investigate which part of the adapter is taking long? Is it the mask computation again?
Also, please make sure that all tests pass. Currently one tutorial seems to fail and there is also an error from the formatter.
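A quick way to see which part of the adapter dominates the runtime is to profile a run with Python's built-in `cProfile`. This is only a minimal sketch: `run_simulation` is a placeholder for the tutorial's solver loop, not a real function from the adapter or tutorial.

```python
# Hypothetical sketch: profile a run with cProfile to see which functions
# dominate the runtime. run_simulation stands in for the tutorial's time
# loop and is NOT a real adapter/tutorial API.
import cProfile
import io
import pstats

def run_simulation():
    # placeholder for the partitioned heat equation time loop
    total = 0.0
    for i in range(1000):
        total += i * 0.5
    return total

profiler = cProfile.Profile()
profiler.enable()
run_simulation()
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(10)  # top 10 by cumulative time
print(stream.getvalue())
```

Sorting by cumulative time makes it easy to spot whether, say, the mask computation or the data mapping accounts for most of the run.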
Actually, the adapter is slower all around, not just the mask computation. For reference, for ny = 72 roughly 51% of the runtime is spent in the mask computation; for ny = 144 it is roughly 63%.
How much slower is the adapter compared to the FEniCS adapter? This would be a good thing to find out. Also, is FEniCSx slower than FEniCS, or is our adapter slower?
I tested the FEniCS tutorial using the preCICE VM and it performs much better for high ny: at ny = 144 it doesn't exceed 12 s. However, I don't know whether it's because of our adapter or because FEniCSx is slower than FEniCS in general.
It should be straightforward to find this out. Using Python's timing facilities, you can measure how much time is spent in adapter functions and how much time the solver is taking. It would be a good insight to find out whether the solvers are the issue or whether it is something on our side.
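The suggestion above can be done with `time.perf_counter` and a small decorator that accumulates per-function wall-clock time. This is a hedged sketch: `advance` is a stand-in for an adapter call, not the real FEniCSx adapter API.

```python
# Minimal sketch: accumulate wall-clock time per function with
# time.perf_counter. `advance` is a placeholder, NOT the real adapter API.
import time
from collections import defaultdict
from functools import wraps

timings = defaultdict(float)

def timed(func):
    """Accumulate the wall-clock time spent in func into `timings`."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        timings[func.__name__] += time.perf_counter() - start
        return result
    return wrapper

@timed
def advance(dt):
    # placeholder for a costly adapter call
    time.sleep(dt)

for _ in range(3):
    advance(0.01)

total = sum(timings.values())
for name, t in timings.items():
    print(f"{name}: {t:.4f} s ({100 * t / total:.1f}% of measured time)")
```

Wrapping both the adapter calls and the solver step this way would show directly whether the solver or the adapter dominates, and would make the 51%/63% mask-computation numbers reproducible.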
This PR moves the mask computation into the initialization of the adapter. This also resolves the issue with the dirty fix.
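The idea behind the PR can be sketched as follows: compute the coupling-boundary mask once at initialization and reuse it in later calls, instead of recomputing it every time. Class and method names here are purely illustrative and do not correspond to the actual adapter code.

```python
# Hedged sketch of the PR's idea: cache the boundary mask at init time.
# AdapterSketch and its methods are illustrative, NOT the real adapter API.
import numpy as np

class AdapterSketch:
    def __init__(self, coordinates, on_boundary):
        # Compute the mask once; on_boundary is a vectorized predicate
        # selecting the coupling-interface points.
        self._mask = on_boundary(coordinates)

    def interface_values(self, field):
        # Later calls just index with the cached mask instead of
        # recomputing it every coupling step.
        return field[self._mask]

coords = np.linspace(0.0, 1.0, 5)
adapter = AdapterSketch(coords, lambda x: x >= 0.5)
print(adapter.interface_values(np.arange(5)))  # values at the masked points
```

Since the mesh does not change between coupling steps in this tutorial, paying the mask cost once at initialization removes it from the per-step runtime entirely.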