flexcompute / tidy3d

Fast electromagnetic solver (FDTD) at scale.
https://docs.flexcompute.com/projects/tidy3d/en/latest/
GNU Lesser General Public License v2.1

Mode solver API 3.0 demo #1523

Open daquinteroflex opened 6 months ago

daquinteroflex commented 6 months ago

Explore a mode solver API structure based on:

Goals:

tylerflex commented 4 months ago

Some notes on how the mode solver might look in 3.0

```python
# all of the physical information
scene = Scene(...)

# in 3.0, make a simulation from the scene, evaluated on the plane
mode_solver_sim = Simulation.from_scene(scene, geometry=plane)

# add a mode solver monitor (containing the mode spec?)
mode_solver_sim = mode_solver_sim.updated_copy(monitors=[mode_solver_mnt])

mode_data = web.run(mode_solver_sim)

mode_source = mode_solver_sim.to_source(...)

fdtd_sim = td.Simulation.from_scene(scene, sources=[mode_source])
```

momchil-flex commented 4 months ago

I think one thing we should decide on even now is whether we want simulations to return some data by default, without monitors. We need to think about this now, since it is already the case in the EME solver, which always stores the EME expansion coefficients. Users can also add monitors if they want to explicitly store extra data like the fields, modes, etc.

For the mode monitor, what would make the most sense to me right now is a refactor where ModeSolver() just no longer takes in a simulation but rather a scene plus the rest of the arguments it currently takes. Similarly to EME, you wouldn't need to pass a monitor to store the modes as that's the fundamental purpose of the ModeSolver, but we could open it up to pass some monitors for extra data, e.g. a PermittivityMonitor. But yeah I think as in EME, having some data internal to the mode solver (always returned) makes sense and simplifies the workflow.
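The refactor described above could look something like the following minimal, self-contained sketch. All names and signatures here are assumptions for illustration, not the actual tidy3d API: `ModeSolver` takes a scene plus its usual arguments, always returns its core mode data, and optionally accepts extra monitors (e.g. a permittivity monitor) for additional stored data.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Scene:
    """Physical information only: structures, media, background."""
    structures: tuple = ()


@dataclass(frozen=True)
class ModeSolver:
    """Hypothetical refactored mode solver: built from a Scene, not a Simulation."""
    scene: Scene
    plane: str            # placeholder for the cross-section geometry
    num_modes: int = 1
    monitors: tuple = ()  # optional extras, e.g. ("permittivity",)

    def run(self) -> dict:
        # Core mode data is always present, as in EME; extra monitor
        # data appears only if the user explicitly requested it.
        data = {"modes": [f"mode_{i}" for i in range(self.num_modes)]}
        for mnt in self.monitors:
            data[mnt] = f"{mnt}_data"
        return data


solver = ModeSolver(scene=Scene(), plane="xy", num_modes=2,
                    monitors=("permittivity",))
result = solver.run()
```

The point of the sketch is the default-data contract: `result` always contains `"modes"`, while `"permittivity"` is present only because a monitor asked for it.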

Looking at your 3.0 suggestion, I am actually starting to wonder about the convenience of the unified Simulation approach. It seems like it could be confusing to not have a simple way to understand what kind of solver will be running. That is to say, it seems like just adding the mode source converts the simulation from MODE to FDTD. Say that I import the fdtd_sim from a file but only want to run the mode solver. One way would be to remove the source, but this kind of handling seems quite abstruse to me. I guess another way is to recreate a new Simulation and only put in the elements that I need for the MODE solve. But yeah generally the parsing of solvers to run based on added components seems like making a lot of things implicitly defined in a way that may not be clear to the user how to control...

Or let's say the user wants to do an EME simulation. I guess we're meant to parse that based on the presence of some eme specific arguments like eme_grid_spec? But I'm worried that this kind of handling could also become quite a validation nightmare. Like, currently we have various validators specific to different solvers. In the unified handling, when validating, we kinda need to figure out which solver exactly we should be validating for. And if the user passes something wrong, like incompatible components that make it unclear what solver we're going to solve for - or maybe unknown to the user, two separate solvers will be initialized without their intent - the error messages could become abstruse?
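The implicit-dispatch problem described above can be made concrete with a toy sketch (not tidy3d code; all names are assumptions): a function that infers which solver(s) to run from which components are present. The failure mode from the discussion falls out immediately, and ambiguous component sets are exactly where error messages get murky.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class UnifiedSim:
    """Toy unified simulation: solver type is implied by its components."""
    sources: tuple = ()
    mode_solver_monitors: tuple = ()
    eme_grid_spec: Optional[str] = None


def infer_solvers(sim: UnifiedSim) -> list:
    """Guess the solver(s) from the components, as a unified Simulation would."""
    solvers = []
    if sim.sources:
        solvers.append("FDTD")  # a source implies a time-domain run
    if sim.mode_solver_monitors and not sim.sources:
        solvers.append("MODE")  # a mode monitor alone implies a mode solve
    if sim.eme_grid_spec is not None:
        solvers.append("EME")   # an EME-specific spec implies EME
    if not solvers:
        raise ValueError("cannot infer a solver from the given components")
    return solvers


# The concern raised above: merely adding a mode source silently
# converts what was a MODE simulation into an FDTD one.
mode_sim = UnifiedSim(mode_solver_monitors=("mnt",))
fdtd_sim = UnifiedSim(sources=("mode_source",), mode_solver_monitors=("mnt",))
```

Removing the source (or rebuilding the simulation with only the MODE-relevant pieces) is then the only way back, which is the workflow objection being made here.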

tylerflex commented 4 months ago

> Looking at your 3.0 suggestion, I am actually starting to wonder about the convenience of the unified Simulation approach. It seems like it could be confusing to not have a simple way to understand what kind of solver will be running. That is to say, it seems like just adding the mode source converts the simulation from MODE to FDTD. Say that I import the fdtd_sim from a file but only want to run the mode solver. One way would be to remove the source, but this kind of handling seems quite abstruse to me. I guess another way is to recreate a new Simulation and only put in the elements that I need for the MODE solve. But yeah generally the parsing of solvers to run based on added components seems like making a lot of things implicitly defined in a way that may not be clear to the user how to control...

Yea, this was actually my impression once I wrote it out. I saw this issue and decided to just see what that would look like with a unified Simulation, but it doesn't seem straightforward. It's almost enough for me to think we shouldn't try this (maybe unless the solvers are computing a similar set of data?). Maybe one way to make things more explicit is to provide web API calls that run specific solvers, e.g. `run(sim, solver="mode")`?
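The explicit alternative floated here can be sketched in a few lines (names and signatures are assumptions, not the actual web API): the call names the solver, so nothing is inferred from the simulation's components, and an unknown solver name fails loudly up front.

```python
def run(sim: dict, solver: str) -> dict:
    """Hypothetical web API call with an explicit solver argument."""
    supported = {"fdtd", "mode", "eme"}
    if solver not in supported:
        raise ValueError(
            f"unknown solver {solver!r}; expected one of {sorted(supported)}"
        )
    # Dispatch explicitly: no guessing from the simulation's components.
    return {"solver": solver, "data": f"{solver}_results_for_{sim['name']}"}


result = run({"name": "waveguide"}, solver="mode")
```

The design trade-off is that the user must know which solver they want, but in exchange the "what will actually run?" question never arises.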

> For the mode monitor, what would make the most sense to me right now is a refactor where ModeSolver() just no longer takes in a simulation but rather a scene plus the rest of the arguments it currently takes.

Yes I agree with the scene part. At least we can probably agree all of the solvers should be built around some kind of "scene" dataset + other, solver specific, stuff.

As far as solvers always returning some core data, I think that makes sense too. Maybe this can just be stored in the respective SimulationData containers. Or, if we go with a unified simulation, the SimulationData can have some fields storing, e.g., the EME coefficient data (if EME is part of the simulation) or other things.
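The "optional solver-specific fields on SimulationData" idea might look like this sketch (field names are assumptions for illustration): core data for a given solver lives in a dedicated field that is populated only when that solver actually ran.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SimulationData:
    """Hypothetical unified results container."""
    monitor_data: dict                 # user-requested monitor data, if any
    eme_coeffs: Optional[list] = None  # populated only for EME runs
    mode_data: Optional[list] = None   # populated only for mode solves


fdtd_result = SimulationData(monitor_data={"flux": 0.98})
eme_result = SimulationData(monitor_data={}, eme_coeffs=[1.0, 0.2])
```

A `None` field then signals "this solver did not run", which keeps the always-returned core data and the optional monitor data cleanly separated.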

> Or let's say the user wants to do an EME simulation. I guess we're meant to parse that based on the presence of some eme specific arguments like eme_grid_spec? But I'm worried that this kind of handling could also become quite a validation nightmare. Like, currently we have various validators specific to different solvers. In the unified handling, when validating, we kinda need to figure out which solver exactly we should be validating for. And if the user passes something wrong, like incompatible components that make it unclear what solver we're going to solve for - or maybe unknown to the user, two separate solvers will be initialized without their intent - the error messages could become abstruse?

I think a lot of this validation would have to be done "post init" and based on some properties like `requires_eme`. I also think we'd need pretty clear logging to let the user know, before running, what kind of solvers will be required. Basically I think this could be solved with a bit of a change to our validation approach, but I also agree it could become unclear or convoluted, so yea, not super sure now..
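The post-init idea can be sketched with a plain dataclass standing in for the real (likely pydantic-based) model; `requires_eme` is the property name from the discussion, everything else is assumed. Solver-specific validators fire only when the corresponding property says that solver is implied, and mismatched components fail with a message naming the problem.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Sim:
    """Toy unified simulation with property-gated, post-init validation."""
    eme_grid_spec: Optional[str] = None
    eme_monitors: tuple = ()

    @property
    def requires_eme(self) -> bool:
        # The solver is implied by the presence of its defining spec.
        return self.eme_grid_spec is not None

    def __post_init__(self):
        # EME-specific validation runs only when EME components appear,
        # and inconsistent combinations are rejected with a clear message.
        if self.eme_monitors and not self.requires_eme:
            raise ValueError(
                "EME monitors were supplied but no eme_grid_spec, so no EME "
                "solve will run; add an eme_grid_spec or remove the monitors."
            )


sim = Sim(eme_grid_spec="uniform")
```

This is where the clear pre-run logging would hook in as well: the same properties that gate validation can report which solvers the simulation will require.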

daquinteroflex commented 4 months ago

Yeah, I agree these are points of concern, especially around making sure this is clear for our users. I believe the motivation behind a single simulation isn't necessarily to replace any particular SolverSimulation, but simply to generate the sets of coupled SolverSimulations for the corresponding physics with ease. Whilst the TaskManager provides all the utility to compile them easily, each individual "compiled" simulation would still be able to be edited and explored, if we want to enable that.

I guess the motivation of this approach is really providing all the user-friendly functionality on one side without interfering with the actual solver functionality on the other, hence decoupling the development. Even if the validation is complex, we're automating something the users would otherwise have to do themselves, or at least providing them with an example should they want to do their own thing building onto the SolverSimulation. I guess these are still open questions about the best user experience in this multiphysics direction, or even in the standard flow. It comes down to how much abstraction we really want to provide, which will constrain generalization.
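The compile-then-edit workflow being described might be sketched as follows. `TaskManager` and `SolverSimulation` are names from this thread, but the structure is purely an assumption: a unified spec compiles into one simulation per solver, and each compiled simulation remains an ordinary object the user can modify afterwards.

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class SolverSimulation:
    """One compiled, solver-specific simulation."""
    solver: str
    scene: str
    run_time: float = 1e-12


@dataclass
class TaskManager:
    """Toy manager that compiles coupled per-solver simulations from one spec."""
    scene: str
    solvers: tuple

    def compile(self) -> dict:
        # One coupled SolverSimulation per requested physics/solver.
        return {s: SolverSimulation(solver=s, scene=self.scene)
                for s in self.solvers}


sims = TaskManager(scene="chip", solvers=("MODE", "FDTD")).compile()
# Each compiled simulation can still be edited independently afterwards:
longer_fdtd = replace(sims["FDTD"], run_time=5e-12)
```

The decoupling claim in the comment maps onto this directly: the manager owns only the user-facing compilation convenience, while each `SolverSimulation` stays a plain object the solver side can validate and run on its own.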


I also agree with the scene approach, and maybe there's a set of generic data types we always want to return for all simulations fundamentally like you suggest.