NanoComp / meep


remove no-size option in geometry-lattice in Scheme #733

Open oskooi opened 5 years ago

oskooi commented 5 years ago

The following Scheme script computes the flux from a line source in an empty 2d cell with periodic boundaries set via k-point. This should be a 2d simulation, but it in fact runs as a 3d simulation, as demonstrated by its output.

Scheme Script

(set-param! resolution 50)

(set! geometry-lattice (make lattice (size 10 0.1 0)))

(set! pml-layers (list (make pml (thickness 1.0) (direction X))))

(define-param fcen 1.0)
(define-param df 0.1)
(define-param nfreq 11)

(set! sources (list (make source
                          (src (make gaussian-src (frequency fcen) (fwidth df)))
                          (component Ez)
                          (center -3.0 0 0)
                          (size 0 0.1 0))))

(set! k-point (vector3 0 0 0))

(set! symmetries (list (make mirror-sym (direction Y))))

(define flux (add-flux fcen df nfreq (make flux-region (center 3.0 0 0) (size 0 0.1 0))))

(run-sources+ 100)

(display-fluxes flux)

Output

Using MPI version 3.1, 1 processes
-----------
Initializing structure...
Working in 3D dimensions.
Computational cell is 10 x 0.1 x 0 with resolution 50
Padding y to even number of grid points.
Halving computational cell along direction y
time for set_epsilon = 0.00753611 s
-----------
run 0 finished at t = 200.0 (20000 timesteps)
flux1:, 0.95, -7.966319634672126e108
flux1:, 0.96, -8.024404456585178e108
flux1:, 0.97, -8.083335878922746e108
flux1:, 0.98, -8.143122526535976e108
flux1:, 0.99, -8.203773181614212e108
flux1:, 1.0, -8.265296786353171e108
flux1:, 1.01, -8.327702445671697e108
flux1:, 1.02, -8.390999430001952e108
flux1:, 1.03, -8.455197178152315e108
flux1:, 1.04, -8.520305300184451e108
flux1:, 1.05, -8.586333580452069e108

Elapsed run time = 3.2363 s

Note that the large flux values indicate that the fields have blown up.

The problem disappears only when the dimensionality is explicitly set via (set! dimensions 2) before the run statement.
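
For reference, the workaround is a single additional line placed anywhere before the run statement (a minimal sketch; the rest of the script above is unchanged):

(set! dimensions 2) ; explicitly force a 2d simulation
(run-sources+ 100)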

Output using (set! dimensions 2)

Using MPI version 3.1, 1 processes
-----------
Initializing structure...
Working in 2D dimensions.
Computational cell is 10 x 0.1 x 0 with resolution 50
Padding y to even number of grid points.
Halving computational cell along direction y
time for set_epsilon = 0.00256283 s
-----------
run 0 finished at t = 200.0 (20000 timesteps)
flux1:, 0.95, 3.506638765788546e-5
flux1:, 0.96, 0.0012502937287218845
flux1:, 0.97, 0.02023985203259736
flux1:, 0.98, 0.1487268343341108
flux1:, 0.99, 0.4961136379698909
flux1:, 1.0, 0.7512363443142058
flux1:, 1.01, 0.516395327008194
flux1:, 1.02, 0.1611369292306764
flux1:, 1.03, 0.02282572835381376
flux1:, 1.04, 0.0014677475549085763
flux1:, 1.05, 4.2851524047583885e-5

Elapsed run time = 1.38167 s

The analogous Python script, which does not require the dimensionality to be explicitly defined, yields a 2d simulation with identical flux results.

Python Script

import meep as mp

resolution = 50

cell_size = mp.Vector3(10,0.1,0)

pml_layers = [mp.PML(thickness=1.0, direction=mp.X)]

fcen = 1.0
df = 0.1
nfreq = 11

sources = [mp.Source(src=mp.GaussianSource(fcen, fwidth=df),
                     component=mp.Ez,
                     center=mp.Vector3(-3.0),
                     size=mp.Vector3(y=0.1))]

k_point = mp.Vector3()

symmetries = [mp.Mirror(direction=mp.Y)]

sim = mp.Simulation(cell_size=cell_size,
                    k_point=k_point,
                    sources=sources,
                    boundary_layers=pml_layers,
                    resolution=resolution,
                    symmetries=symmetries)

flux = sim.add_flux(fcen, df, nfreq, mp.FluxRegion(center=mp.Vector3(3.0),size=mp.Vector3(y=0.1)))

sim.run(until_after_sources=100)

sim.display_fluxes(flux)

Output

Using MPI version 3.1, 1 processes
-----------
Initializing structure...
Padding y to even number of grid points.
Halving computational cell along direction y
Working in 2D dimensions.
Computational cell is 10 x 0.1 x 0 with resolution 50
time for set_epsilon = 0.0141454 s
-----------
run 0 finished at t = 200.0 (20000 timesteps)
flux1:, 0.95, 3.506638765788546e-05
flux1:, 0.96, 0.0012502937287218845
flux1:, 0.97, 0.02023985203259736
flux1:, 0.98, 0.1487268343341108
flux1:, 0.99, 0.4961136379698909
flux1:, 1.0, 0.7512363443142058
flux1:, 1.01, 0.516395327008194
flux1:, 1.02, 0.1611369292306764
flux1:, 1.03, 0.02282572835381376
flux1:, 1.04, 0.0014677475549085763
flux1:, 1.05, 4.2851524047583885e-05

Elapsed run time = 0.5132 s

Note that the Python script is nearly three times faster than its correctly-functioning Scheme counterpart.

ChristopherHogan commented 5 years ago

I think you have to use no-size instead of 0 for the z in (make lattice ...) for it to reduce to 2d.
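
For example, the geometry-lattice line of the script above would become (sketch):

(set! geometry-lattice (make lattice (size 10 0.1 no-size))) ; no-size in z reduces the cell to 2d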

oskooi commented 5 years ago

Using no-size instead of 0 does fix the issue. However, the 2d (via no-size in z) and 3d (via 0 in z) simulations produce nearly identical results when the cell size in the y direction is large, e.g. > 10. It is only when the y cell size is small (in the example above it was 0.1) that the discrepancy appears.

Thus, to resolve the ambiguity of whether to use 0 or no-size, and to make the Scheme interface consistent with Python (which has no no_size keyword), it might be appropriate to remove the no-size keyword from Scheme.
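
If no-size were removed, a cell with no extent in z would presumably be specified the same way as in Python, i.e. with a 0 component (a hypothetical sketch of the proposed behavior, not how the Scheme interface currently works):

(set! geometry-lattice (make lattice (size 10 0.1 0))) ; proposed: 0 in z would mean no size, i.e. a 2d simulation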