HomerReid / scuff-em

A comprehensive and full-featured computational physics suite for boundary-element analysis of electromagnetic scattering, fluctuation-induced phenomena (Casimir forces and radiative heat transfer), nanophotonics, RF device engineering, electrostatics, and more. Includes a core library with C++ and python APIs as well as many command-line applications.
http://www.homerreid.com/scuff-em
GNU General Public License v2.0

scuff-cas3D with temperature command line #113

Open wmkwmkwmk opened 7 years ago

wmkwmkwmk commented 7 years ago

Hi, I have a question about the Matsubara-frequency calculation. When I run scuff-cas3D with the command-line option --temperature 4, the output file '.byXi' looks like this:

#1: transform tag
#2: imaginary angular frequency
#3: z-force Xi integrand
#4: z-force error due to numerical Brillouin-zone integration 
-1.80 1.000000e-03 8.06344050e-01 7.81691349e+00 
-1.80 1.097705e-02 -4.23847930e-01 -1.28202895e-02 
-1.80 2.195410e-02 -3.97708110e-01 -4.61630833e-03 
-1.80 3.293115e-02 -3.84450390e-01 -1.09919678e-02 
-1.80 4.390821e-02 -4.03454230e-01 -1.45491034e-02 

My script looks like this:

#!/bin/bash
echo beg_Silicon1.8_4K
ARGS=""
ARGS="${ARGS} --geometry AS.scuffgeo"
ARGS="${ARGS} --TransFile translist"
ARGS="${ARGS} --zforce"
ARGS="${ARGS} --temperature 4"
ARGS="${ARGS} --RelTol 2.0e-1"
scuff-cas3D ${ARGS}

How can I get the contribution of the zero Matsubara frequency? Do I need to divide it by 2 myself?

HomerReid commented 7 years ago

SCUFF-CAS3D doesn't work at precisely zero frequency (\xi=0), so instead it approximates the contribution of \xi=0 by running a calculation at a very small nonzero frequency (\xi=0.001 by default, although you can modify this with the --xiMin command-line option).

SCUFF-CAS3D takes care of weighting this contribution by 1/2 in the Matsubara sum, so you don't need to worry about that.
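
Schematically, the quantity being computed is the Matsubara sum (suppressing unit conventions and overall prefactors here)

F(T) = (2\pi k_B T / \hbar) \sum_{n=0}^{\infty}{}' f(\xi_n),    \xi_n = 2\pi n k_B T / \hbar,

where f(\xi) is the integrand reported in the .byXi file and the prime on the sum indicates that the n=0 term enters with weight 1/2; that 1/2 is the factor scuff-cas3D applies for you.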

Note: For compact objects, the procedure of replacing \xi=0 with a small nonzero frequency always works fine---the Casimir integrand as a function of \xi is very well approximated by a constant for all \xi below some threshold (which is geometry-dependent but typically around 0.1 for micron-scale bodies).

However, for extended objects this procedure may require more scrutiny. In particular, the Brillouin-zone integration may be poorly behaved at very small \xi values, so it is not necessarily trivial to identify the proper value of --xiMin to use. (Indeed, notice that the BZ integration error at \xi=0.001 is huge in your data---around 1000%!---whereas the BZ integration error is around 10% or smaller for \xi=0.01 and higher frequencies).

I would encourage you to look at the Casimir integrand at frequencies between \xi=1e-4 and \xi=1e-2 (do a run with a --xiFile specifying perhaps 20 logarithmically-spaced frequencies over that range). Try adjusting the BZ integration parameters to see if you need higher resolution in the BZ integration to get converged results at small \xi values. For example, try increasing the maximum number of BZ samples by setting --BZIMaxEvals 2000 or even --BZIMaxEvals 10000 (the default is --BZIMaxEvals 1000).
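
For example, a run along these lines would do it (XiValues.dat is just an arbitrary file name; the file passed to --XiFile is simply a list of imaginary frequencies, one per line):

#!/bin/bash
# sketch only: write 20 logarithmically-spaced Xi values between 1e-4 and 1e-2
# to XiValues.dat (one value per line), then run scuff-cas3D over that list
awk 'BEGIN { for (n = 0; n < 20; n++) printf "%e\n", exp(log(10)*(-4 + 2.0*n/19)) }' > XiValues.dat

ARGS=""
ARGS="${ARGS} --geometry AS.scuffgeo"
ARGS="${ARGS} --TransFile translist"
ARGS="${ARGS} --zforce"
ARGS="${ARGS} --XiFile XiValues.dat"
ARGS="${ARGS} --BZIMaxEvals 2000"
scuff-cas3D ${ARGS}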

Also, try plotting the BZ integrand as a function of Bloch wavevector (k_x, k_y) for a couple of different --Xi values. This may give you some insight into how to configure the BZ integration tolerances.
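
As a rough sketch of what I mean (the column positions below are assumptions, so check the column key at the top of your own .byXikBloch file; AS.byXikBloch is just the default output name for your geometry file):

# sketch only: extract the rows for a single Xi value (written exactly as it
# appears in the file, here 1.000000e-03), assuming column 2 holds Xi
awk '$2 == "1.000000e-03"' AS.byXikBloch > BZslice.dat

# then scatter-plot the integrand over the Brillouin zone, assuming k_x, k_y,
# and the z-force integrand are in columns 3, 4, and 5
gnuplot -persist <<'EOF'
set xlabel "k_x"
set ylabel "k_y"
set zlabel "z-force BZ integrand"
splot 'BZslice.dat' using 3:4:5 with points pointtype 7
EOF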

wmkwmkwmk commented 7 years ago

Hi, I followed your suggestion. Before trying to

look at the Casimir integrand at frequencies between \xi=1e-4 and \xi=1e-2 (do a run with a --xiFile specifying perhaps 20 logarithmically-spaced frequencies over that range)

I used --BZIMaxEvals to improve the resolution of the BZ integration. However, it doesn't seem to work, and I'm not sure whether I'm using it correctly. Here is my script:

#!/bin/bash
echo beg_Silicon1.95_4K_5000_2
ARGS=""
ARGS="${ARGS} --geometry AP.scuffgeo"
ARGS="${ARGS} --TransFile translist"
ARGS="${ARGS} --zforce"
ARGS="${ARGS} --RelTol 2.0e-1"
ARGS="${ARGS} --BZIMaxEvals 5000"
ARGS="${ARGS} --XiFile fre"
scuff-cas3D ${ARGS}

I tried Xi=1e-3, 5e-3, and 1e-2 with --BZIMaxEvals 1000, 2000, and 5000. However, there is no difference between the different --BZIMaxEvals settings. The output file looks like:

# data file columns: 
#1: transform tag
#2: imaginary angular frequency
#3: z-force Xi integrand
#4: z-force error due to numerical Brillouin-zone integration 
#5: BZIMaxEvals
-1.95 1.000000e-03 -5.43399983e+00 9.50803717e-01 1000
-1.95 1.000000e-03 -5.43405270e+00 9.50803667e-01 2000
-1.95 1.000000e-03 -5.43387806e+00 9.50803797e-01 5000
-1.95 5.000000e-03 -5.94686144e+01 8.91218282e+00 1000
-1.95 5.000000e-03 -5.94686141e+01 8.91218302e+00 2000
-1.95 5.000000e-03 -5.94686066e+01 8.91218283e+00 5000
-1.95 1.000000e-02 -4.59994968e+00 -4.26805685e+00 1000
-1.95 1.000000e-02 -4.59994991e+00 -4.26805686e+00 2000
-1.95 1.000000e-02 -4.59994968e+00 -4.26805685e+00 5000

Am I using --BZIMaxEvals in the wrong way?

Furthermore, it seems that \xi=1e-3 has not converged? I also plotted the .byXikBloch file for --BZIMaxEvals 5000; please see the attached plots (lowest_frequency_k_vs_q_5e-3notgood_zoomout, lowest_frequency_k_vs_q_5e-3notgood). It seems \xi=5e-3 has two bad points. Do you think I need to keep improving the BZ integration, or is using \xi=1e-3 fine?

wmkwmkwmk commented 7 years ago

Here are the results for Xi=1e-4 to 1e-2 with --BZIMaxEvals 10000.

%1: transform tag
%2: imaginary angular frequency
%3: z-force Xi integrand
%4: z-force error due to numerical Brillouin-zone integration 
%%-1.95 1.000000e-04 4.69143336e+01 1.83108556e+01 
%%-1.95 1.274275e-04 2.33759870e+01 1.60650483e+01 
-1.95 1.623777e-04 1.68441313e+00 1.30307784e+01 
-1.95 2.069138e-04 1.58760061e+01 1.10253430e+01 
%%-1.95 2.636651e-04 3.81176654e+01 1.11432941e+01 
-1.95 3.359818e-04 1.59581745e+01 8.47392117e+00 
-1.95 4.281332e-04 -1.01519908e+01 -1.16811448e+01 
-1.95 5.455595e-04 5.39313123e+00 2.87460744e+00 
%-1.95 6.951928e-04 -2.65817903e+02 2.02442051e+00 
%-1.95 8.858668e-04 3.44181131e+02 1.33032380e+00
%%-1.95 1.128838e-03 -3.84178217e+01 5.60212002e-01 
%%-1.95 1.438450e-03 -1.95138254e+01 3.43726648e-01 
-1.95 1.832981e-03 -3.24519495e+00 3.86646497e+00 
-1.95 2.335721e-03 -2.31713036e-01 -4.40188597e-01 
-1.95 2.976351e-03 -8.66738207e+00 7.51023372e-01 
-1.95 3.792690e-03 -6.69523029e+00 2.53438705e+00 
-1.95 4.832930e-03 3.21542506e+00 6.37750180e+00 
-1.95 6.158482e-03 -2.96225146e+00 1.66208674e+01 
-1.95 7.847600e-03 -5.75400611e+00 -1.49980961e+01 
-1.95 1.000000e-02 -4.59994968e+00 -4.26805685e+00

I plot "Xi" and "byXikBloch" as follow. Originally I simulate 20 points, however, after I plot "byXikBloch", 7 of them have bad points. So I have removed them(%% in datasheet).

(attached plots: 1e-4to1e-2_xi, 1e-4to1e-2_byxikbloch)

Do you have any suggestions for improving it?

HomerReid commented 7 years ago

However, there is no difference between the different --BZIMaxEvals settings.

That's because you used the command-line option --RelTol 2.0e-1, which means you are only asking for answers correct to within \pm 20%. The numerical integrator achieves this tolerance before it ever reaches the number of evaluations specified by --BZIMaxEvals. The purpose of that option is to prevent the numerical integrator from using excessively many samples in cases where it is having trouble achieving your requested tolerance. If your requested tolerance is very lenient, as is the case here, that upper limit is never reached.

Since your BZ integrand is symmetric w.r.t. sign flip of k_x, you can use the command-line option --BZSymmetryFactor 2 to reduce the cost of BZ integration by half.

What is the geometric configuration described by transformation label -1.95 ? Does this correspond to closely-separated bodies? If so, try running calculations with the bodies at further distances from each other and see if that improves the convergence.

Also, it would be interesting to study the convergence as a function of integrand samples. For this purpose you can bypass the adaptive integrator by eliminating the --BZIMaxEvals and instead saying e.g. --BZIOrder 11 to request 11 samples. It would be nice to see how the BZ integral converges for small \xi values as you increase --BZIOrder from (say) 11 to (say) 99. (Only odd values of --BZIOrder are supported, so you might try values of 11, 15, 21, 25, 31, 35, ..., 99, or some subset.)

To summarize, I would suggest:

1. For an easier geometric configuration, and

2. for a few low frequencies, maybe --Xi={0.001, 0.003, 0.005, 0.01},

3. run with --BZSymmetryFactor 2 --BZIOrder {11, 15, 21, ..., 99}

and plot convergence of the .byXi data vs. --BZIOrder. Hopefully you will see curves that are jagged and spiky for small values of --BZIOrder but which eventually smooth out and converge to stable values for higher --BZIOrder values, and that will define the threshold number of BZ samples you need to get good accuracy at your low frequencies.
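
A rough sketch of such a scan might look like the following (LowXiValues.dat is a placeholder for a file containing the handful of low \xi values, one per line; --FileBase is used only to keep each run's output separate, so if your build doesn't accept it, just rename the output files between runs):

#!/bin/bash
# sketch only: convergence study of the BZ integration vs. --BZIOrder
for ORDER in 11 15 21 25 31 35 41 45 51 55 61 65 71 75 81 85 91 95 99
do
  ARGS=""
  ARGS="${ARGS} --geometry AP.scuffgeo"
  ARGS="${ARGS} --TransFile translist"
  ARGS="${ARGS} --zforce"
  ARGS="${ARGS} --XiFile LowXiValues.dat"
  ARGS="${ARGS} --BZSymmetryFactor 2"
  ARGS="${ARGS} --BZIOrder ${ORDER}"
  ARGS="${ARGS} --FileBase AP_Order${ORDER}"
  scuff-cas3D ${ARGS}
done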

HomerReid commented 7 years ago

PS. An undocumented hack is that you can use negative values of --BZIOrder (like --BZIOrder -33) to request integration using 33 samples but with the samples chosen by a different method (namely, rectangular-rule instead of Clenshaw-Curtis quadrature). I don't think that will improve things in your case, but it might be worth trying.

HomerReid commented 7 years ago

Whoops, looking back through your command-line arguments I see that you were setting --RelTol 2.0e-1, whereas I mistakenly thought you were setting --BZIRelTol 2.0e-1. The former controls the accuracy of the \xi integration (and the Matsubara summation), while the latter controls the accuracy of the BZ integration. My mistake.
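
So if the goal is to tighten only the BZ integration, something along these lines (tolerance values purely illustrative) keeps the lenient \xi-integration/Matsubara tolerance while demanding more of the inner BZ integrator:

#!/bin/bash
# sketch only: ~20% tolerance for the outer Xi/Matsubara summation, but ~1%
# tolerance for the inner Brillouin-zone integration, capped at 5000 samples
ARGS=""
ARGS="${ARGS} --geometry AP.scuffgeo"
ARGS="${ARGS} --TransFile translist"
ARGS="${ARGS} --zforce"
ARGS="${ARGS} --temperature 4"
ARGS="${ARGS} --RelTol 2.0e-1"
ARGS="${ARGS} --BZIRelTol 1.0e-2"
ARGS="${ARGS} --BZIMaxEvals 5000"
scuff-cas3D ${ARGS}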