sirocco-rt / sirocco

This is the repository for Sirocco, the radiative transfer code used to model winds in AGN and other systems
GNU General Public License v3.0

Luminosity conservation with BF_SIMPLE_EMISSIVITY approach #648

Closed saultyevil closed 5 months ago

saultyevil commented 4 years ago

I'm a little confused about some of the reporting for macro atoms. Currently it looks like the total luminosity in my TDE models is not conserved, even with reflecting boundaries.

To double check that it wasn't just an optical depth effect in the TDE models, I ran the CV macro benchmark in the examples/extended directory - with a slight modification to make the disc reflecting, to make things a bit clearer. I ran this model using the current dev version and using py82i from August 2018.

dev

!!python: Total photon luminosity before transphot 4.477575835761e+34

 Ion. Cycle 2/2 of cv_macro : Photon          0 of     100000 or    0.0 per cent 

!!python: photon transport completed in: 173.884006 seconds
Error: trans_phot: 139 photons were lost due to DFUDGE (=1.0000e+05) pushing them outside of the wind after scatter
!!python: Total photon luminosity after transphot  4.504574929609e+34 (absorbed/lost  2.699909384749e+32). Radiated luminosity 4.499440929011e+34
!!python: luminosity lost by adiabatic kpkt destruction 0.000000000000e+00 number of packets 0
!!python: luminosity lost to low-frequency free-free    0.000000000000e+00 number of packets 0
!!python: luminosity lost by being completely absorbed  9.001218748777e+21 
!!python: luminosity lost by too many scatters          0.000000000000e+00 
!!python: luminosity lost by hitting the star           5.134000596935e+31 
!!python: luminosity lost by hitting the disk           0.000000000000e+00 
!!python: luminosity lost by errors                     0.000000000000e+00 
!!python: luminosity lost by the unknown                0.000000000000e+00 

It looks like we gained a little bit of luminosity with the current dev build. The fact that luminosity is lost to the star is, I believe, linked to issue #645, but it's a small enough fraction in this case compared to the TDE models. It's strange that we are losing photons due to DFUDGE and to the star, yet have still gained some luminosity somewhere. There is no adiabatic heating, but there is a little cooling (see the attached diag file for photon generation diagnostics).

cv_macro_0.txt

82i

!!python: Total photon luminosity before transphot 4.477570309373e+34

Photon       0 of  100000 or  0.000 per cent 

!!python: Total photon luminosity after transphot  4.477570277948e+34 (absorbed/lost  -3.142473688454e+26). Radiated luminosity 4.477570277948e+34
!!python: luminosity lost by adiabatic kpkt destruction 0.000000000000e+00 number of packets 0
!!python: luminosity lost by being completely absorbed  0.000000000000e+00 
!!python: luminosity lost by too many scatters          0.000000000000e+00 
!!python: luminosity lost by hitting the star           0.000000000000e+00 
!!python: luminosity lost by hitting the disk           0.000000000000e+00 
!!python: luminosity lost by errors                     0.000000000000e+00 
!!python: luminosity lost by the unknown                0.000000000000e+00 

We lose only a small fraction of luminosity this time, but apparently to nothing.

I think I must be missing something here. Do we expect the luminosity to balance? I was under the impression that, as we do not modify the photon weights in macro-atom mode, what we put in should come out. Is it possible there is an error in the reporting here where weight is lost? Or am I misunderstanding or missing something?
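To spell out the bookkeeping I'm assuming (a toy sketch only, nothing to do with the actual code - the names and numbers below are made up for illustration): if packets are indivisible, summing the packet weights before and after transport should give the same number, up to Doppler shifts and packets genuinely lost from the grid.

#include <stdio.h>

/* Toy bookkeeping sketch only, not Python/Sirocco code: with indivisible
   packets, total weight before and after "transport" should match. */

#define NPACKETS 4

int
main (void)
{
  double w[NPACKETS] = { 1.0, 1.0, 1.0, 1.0 };  /* packet weights at creation */
  double before = 0.0, after = 0.0;
  int n;

  for (n = 0; n < NPACKETS; n++)
    before += w[n];

  /* Transport would happen here. In macro-atom mode nothing should touch
     w[n]; any weight-reducing interaction breaks the balance, e.g.: */
  w[1] /= 1.1;

  for (n = 0; n < NPACKETS; n++)
    after += w[n];

  printf ("luminosity before = %g, after = %g\n", before, after);
  return 0;
}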

Tagging @kslong @jhmatthews @ssim because this is somewhat urgent for our TDE models.

kslong commented 4 years ago

I wonder if this is related to #285, but I think we need to get the rest of the issues associated with #648 sorted first.

saultyevil commented 4 years ago

I have been digging into this issue a bit more today. I've potentially found a line of code which could be causing the problem.

So, there's a comment in Python which says that in macro-atom mode, b-f and other continuum processes do not reduce the photon weight. But it appears to me that photons may have their weight modified by electron (Compton) scattering. From what I understand from the header in emission.c, nres = -1 indicates that the photon will undergo electron scattering.

In scatter, b-b, photoionisation and f-f are all treated in the macro approach. For electron scattering, we call compton_dir for both simple and macro atoms. We then calculate the fractional change in energy due to scattering - if the photon is low energy then the scattering is elastic, which I believe has a cutoff frequency of ~1e14 Hz. We then modify the photon frequency and weight by this fraction.

Is this something which is intended to happen?

Edit: maybe this is related to issue #295?

ssim commented 4 years ago

Worth noting the usual nomenclature question here - this is not really about macro atoms but about divisible versus indivisible packets (sorry, this is probably mostly my fault because back in 2005 when adding "macro atoms" to Python we were not precise enough in the language we used in the comments)!

Anyway, it's not macro atoms that say anything about needing to reduce weights really, it's the indivisibility argument. In the indivisible scheme it would be true that there would be no weight reduction, and those parts of the code I wrote to facilitate macro atoms do (or at least did) try to impose that assumption.

The particular case you mention - related to Compton scattering - might well be of interest (although not really because of anything to do with macro atoms exactly). In particular, if Compton scattering is always treated in the "cold electron" regime, i.e. it is always a weight reduction of the photon packets in the fluid frame, then I can imagine that eventually this leads to a problem if the Compton optical depths are very large: the photon field will be able to keep cooling and cooling and cooling (albeit gradually) because the electrons are treated as arbitrarily cold - this would ultimately be a loss of energy from the simulation, which might be problematic. Is that what we do with the weight reduction in Compton scattering - or do we account for inverse Compton scattering somehow to avoid that? (I.e., eventually, the photons and electrons come into Compton equilibrium - if that's the name? - where inverse Compton scattering is effectively an energy source for the photon pool - do we already do that? If we don't, maybe that's relevant to the problem here?)
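For reference, the non-relativistic expression I have in mind (quoting from memory, so please check the factors) for the mean fractional energy change of a photon per scatter off thermal electrons at temperature T_e is

    <Delta E / E>  ~  (4 k T_e - h nu) / (m_e c^2)

so a "cold electron" treatment keeps only the -h nu / (m_e c^2) downscattering piece: photons always lose energy to the gas no matter how hot it is. With the 4 k T_e (inverse Compton) term included, the net exchange shuts off once h nu ~ 4 k T_e, which is the Compton equilibrium I was referring to.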

lazygun37 commented 4 years ago

Agreed that we probably should start talking about indivisible vs divisible packet approaches. (Actually, I'm not sure even that really captures it, does it? It's really about whether individual packets conserve energy or not -- we don't really "divide" photons even in "simple atom" mode.)

Anyway, just to add some context to Ed's issue and perhaps answer part of your questions, Stuart. We noticed that in our macro atom (i.e. indivisible packet) runs the reported luminosities before and after trans_phot are often not the same. They can go either way, actually. And this is apparently not associated with adiabatic heating/cooling, which is the one case I could see where this might happen.

So we agreed that Ed would go through the code and literally check where the photon weights are changed -- the idea was that they should basically never be changed in sections of the code that are followed in macro mode.

What Ed found was that the only case where the weights do seem to get touched in macro mode is electron scattering. Specifically, the logic is this. Inside "scatter", we have the following:

/* SS July 04
   Next block is modified to include the thermal trapping model for anisotropic scattering.
   The code for this has been moved from trans_phot to here so that this model can work with
   macro atoms. For macro atoms the code above decides that emission will occur in the line -
   we now just need to use the thermal trapping model to choose the direction. */

if (*nres == -1)                // It's an electron scatter, so we will call compton to get a direction
{
  p->freq = freq_comoving;      // This is the photon frequency in the electron rest frame calculated earlier in the routine
  compton_dir (p, xplasma);     // Get a new direction using the KN formula
  v_dop = dot (p->lmn, v);      // Find the dot product of the new direction with the wind velocity
  p->freq = p->freq / (1. - v_dop / VLIGHT);    // Transform back to the observer's frame
}

So, to me, it looks like the idea here may have been that compton_dir would only get a new direction. But, actually, compton_dir also changes the photon weight:

compton_dir (p, xplasma)
     PhotPtr p;                 // Pointer to the current photon
     PlasmaPtr xplasma;         // Pointer to current plasma cell
{
  double f_min, f_max, f;       // Fractional energy changes - E_old/E_new - minimum possible, maximum possible, actual as implied by random cross section
  double n, l, m, phi, len;     // The direction cosines of the new photon direction in the frame of reference with q along the photon path
  struct basis nbasis;          // The basis function which transforms between the photon frame and the observer frame
  double lmn[3];                /* the individual direction cosines in the rotated frame */
  double x[3];                  /* photon direction in the frame of reference of the original photon */
  double dummy[3], c[3];

  x1 = PLANCK * p->freq / MELEC / VLIGHT / VLIGHT;      //compute the ratio of photon energy to electron energy. In the electron rest frame this is just the electron rest mass energy

  n = l = m = 0.0;              //initialise some variables to avoid warnings

  if (x1 < 0.0001)              //If the photon energy is much less than the electron rest mass energy, we just have Thomson scattering - scattering is isotropic
  {
    randvec (lmn, 1.0);         //Generate a normal isotropic scatter
    f = 1.0;                    //There is no energy loss
    stuff_v (lmn, p->lmn);
  }
  else
  {
    sigma_rand = random_number (0.0, 1.0);      //Generate a random number between 0 and 1 - this represents a randomised cross section (normalised to the maximum which our photon packet sees)
    f_min = 1.;                 //The minimum energy change - i.e. no energy loss - the scattering angle is zero - the photon does not change direction
    f_max = 1. + (2. * x1);     //The maximum energy change - this occurs if the scattering angle is 180 degrees (i.e. the photon bounces straight back). f = e_old/e_new
    sigma_max = sigma_compton_partial (f_max, x1);      //Communicated externally to the integrand function in the zero_find call below; this is the maximum cross section, used to scale the K-N function to lie between 0 and 1. This is essentially the chance of a photon scattering through 180 degrees - or the angle giving the maximum energy loss
    f = zero_find (compton_func, f_min, f_max, 1e-8);   //Find the zero point of the function compton_func - this finds the point in the KN function that represents our randomised fractional energy loss z_rand.

    /* We now have the fractional energy change f - we use the 'normal' equation for Compton scattering
       to obtain the angle cosine n = cos(theta) for the scattering direction */

    n = (1. - ((f - 1.) / x1)); //This is the angle cosine of the new direction in the frame of reference of the photon - this gives a 2D scattering angle

    if (isfinite (len = sqrt (1. - (n * n))) == 0)      //Compute the length of the other angle cosines - the isfinite is to take care of the very rare occasion where n=1!
      len = 0.0;                //If n=1, then the photon has either been undeflected or bounced straight back. Both are vanishingly unlikely but need to be treated.
    phi = 0.0;                  //no need to randomise phi, the random rotation of the vector generating the basis function takes care of this

    l = len * cos (phi);        //compute the angle cosines of the other two dimensions
    m = len * sin (phi);

    randvec (dummy, 1.0);       //Get a random vector of length 1 - this is returned in the array dummy
    cross (dummy, p->lmn, c);   //c will be perpendicular to p->lmn (the original photon direction) *and* the random vector just computed
    create_basis (p->lmn, c, &nbasis);  //create a basis with the first axis in the direction of the original photon direction; c will be perpendicular to the photon direction. Orientation of the y/z axes will give a randomization of the phi axis

    x[0] = n;                   //This is the cosine direction of the new photon direction, in the frame of reference where the first axis is in the original photon direction
    x[1] = l;
    x[2] = m;

    project_from (&nbasis, x, lmn);     /* Project the vector from the FOR of the original photon into the observer frame */
    renorm (lmn, 1.0);          //Make sure the length of the direction vector is equal to 1
    stuff_v (lmn, p->lmn);      //Put the new photon direction into the photon structure
  }

  p->freq = p->freq / f;        //reduce the photon frequency by the fractional energy change
  p->w = p->w / f;              //reduce the photon weight by the same amount to conserve photon numbers
  return (0);
}

So the question is whether this is correct for the macro case -- it seems to me that it probably can't be? Isn't the photon weight change here an interaction with the thermal pool that -- in macro mode -- would need to be handled via k-packets? Is the thermal effect of this weight change on Te accounted for anywhere?
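Just to spell out what I mean in a purely schematic way (this is not a proposed patch, and the helper below is hypothetical, not existing code): for a Compton scatter with fractional energy change f = E_old/E_new >= 1, the weight drops from w to w/f, so the co-moving-frame energy handed to the electrons per packet is

/* Purely illustrative, hypothetical helper - not existing code. */
double
compton_energy_given_to_electrons (double w, double f)
{
  /* weight drops from w to w/f, so this much energy leaves the packet */
  return w * (1.0 - 1.0 / f);
}

In indivisible-packet (macro) mode my naive expectation is that this amount would have to be fed into the k-packet machinery (and hence into the heating/cooling balance for Te), rather than simply disappearing from the radiation field via the weight reduction.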

Apologies if I misunderstood your question, Stuart, and the above doesn't address anything relevant to it. But hopefully it's at least a little bit useful anyway.

Cheers,

Christian


saultyevil commented 4 years ago

To update on this issue - the Compton scattering code was actually a red herring. The actual problem here is the new simple-atom BF emissivity scheme (BF_SIMPLE_EMISSIVITY) implemented during our meetup in Belfast.

Following on from the last telecon, I ran a couple of tests to see how the luminosity is conserved.

tde_matom_conservation.pf.txt

Test 1

The first test was to use a model with only H and He included, the idea being that, since the new scheme is used only for elements without macro atoms, if we do not include any such elements then the weight of photon packets should not be changed by the new scheme.

When I ran this model, everything was good and the luminosity was conserved to within the accuracy allowed by our Doppler shifts, as you can see in the plot below.

tde_matom_conservation_H_He_only_lum_conservation

Test 2

My next test was to keep running the test model for longer until it converged, since as the model becomes more converged we should expect the luminosity change to decrease as well. This time the model converged to ~75%, and it does look like the luminosity difference is converging... but maybe not to where we expect?

tde_matom_conservation_lum_conservation

I'm continuing to run the model for longer over the weekend to see if it changes. But @jhmatthews @ssim, do either of you have any ideas what could be going on?

kslong commented 4 years ago

OK, so I have also run two models, and I believe these results are consistent with what @saultyevil has been saying. I have been using this pf file and variants of it.

tde30.pf.txt

It is similar to tde_cv_slightly_smaller.pf except as indicated below:

< Wind.dim.in.x_or_r.direction               15
< Wind.dim.in.z_or_theta.direction           15
< Photons_per_cycle 1e6
---
> Wind.dim.in.x_or_r.direction               30
> Wind.dim.in.z_or_theta.direction           30
> Photons_per_cycle 1e7

When run with the new treatment of simple macro atoms, I find poor convergence properties, and the emitted luminosities exceed the expected luminosity. At the end of 15 cycles, it was about 65% conserved.

image

and the emitted luminosity jumped around from cycle to cycle and was 1.1e44, compared to 7.3e43 expected at cycle 15.

This is how it looks after each cycle (which, by the way, was run with the -p option). There is not a significant improvement with time, which is something I would have expected if it was going to come out correctly as we headed towards convergence.

!!python: Total photon luminosity after transphot  9.836728452916e+43 (absorbed/lost  2.509837084649e+43). Radiated luminosity 9.836634438219e+43
!!python: Total photon luminosity after transphot  7.414225633556e+43 (absorbed/lost  8.723636226696e+41). Radiated luminosity 7.413929808872e+43
!!python: Total photon luminosity after transphot  1.429467120295e+44 (absorbed/lost  6.967735477853e+43). Radiated luminosity 1.429416138025e+44
!!python: Total photon luminosity after transphot  8.150090814303e+43 (absorbed/lost  8.231659782543e+42). Radiated luminosity 8.149234376252e+43
!!python: Total photon luminosity after transphot  9.789268508501e+43 (absorbed/lost  2.462328587004e+43). Radiated luminosity 9.789075424411e+43
!!python: Total photon luminosity after transphot  1.087656624585e+44 (absorbed/lost  3.549691358315e+43). Radiated luminosity 1.087602448251e+44
!!python: Total photon luminosity after transphot  1.091956440424e+44 (absorbed/lost  3.592660902420e+43). Radiated luminosity 1.091891491901e+44
!!python: Total photon luminosity after transphot  9.705919255051e+43 (absorbed/lost  2.379058664629e+43). Radiated luminosity 9.705655887584e+43
!!python: Total photon luminosity after transphot  1.309830366268e+44 (absorbed/lost  5.771395435914e+43). Radiated luminosity 1.309804964312e+44
!!python: Total photon luminosity after transphot  1.116176368458e+44 (absorbed/lost  3.834903419511e+43). Radiated luminosity 1.116148133538e+44
!!python: Total photon luminosity after transphot  1.255748098227e+44 (absorbed/lost  5.230635234821e+43). Radiated luminosity 1.255715716614e+44
!!python: Total photon luminosity after transphot  1.347520035158e+44 (absorbed/lost  6.148341472757e+43). Radiated luminosity 1.347480569974e+44
!!python: Total photon luminosity after transphot  9.911795929846e+43 (absorbed/lost  2.584940490788e+43). Radiated luminosity 9.911317798881e+43
!!python: Total photon luminosity after transphot  1.148433930293e+44 (absorbed/lost  4.157487537384e+43). Radiated luminosity 1.148388638425e+44
!!python: Total photon luminosity after transphot  1.227268631220e+44 (absorbed/lost  4.945845741107e+43). Radiated luminosity 1.227198936322e+44

I then changed Python to run with the old macro-atom treatment. I found improved convergence properties, as shown below:

image

Here I found that the luminosities match at 7.3e43.

kslong commented 4 years ago

I have done more to look at this situation. Specifically, I have run several versions of the tde30 model indicated above, most recently two versions with the post-Belfast bf treatment for simple atoms. The two versions differ only in that one is run with h20_hetop_standard80 and the other with just standard80, so the second version contains no macro atoms. Neither version converges very well. The total spectra are fairly different.

With H and He as macro atoms

cont spec_tot

Without H and He as macro atoms

retro30 spec_tot

Also, the balance between the amount of energy going into and out of the ionization pool is rather different. This is for the entire wind.

Presumably the reason that the energy flow into and out of the pool is higher without H and He as macro atoms is that H and He are now simple atoms, and hence participate in adding and subtracting energy from the ionization pool. My initial thought was that one way to interpret the fact that the ratio is closer to 1 in the case of no true macro atoms is that something is wrong when a fb excitation from a simple atom results in a recombination to a macro atom, or vice versa. This is something that might need investigating. But it could also be that, with H and He, we are now counting all of these as part of the energy flow, and they are not as badly balanced as those of the metals.

If you look at the ratio of macro_bf_in to macro_bf_out, one sees that there is more scatter with macro atoms. My guess, though, is that this is simply photon numbers, unfortunately.

With

cont ratio

Without

retro30 ratio

One other thing that I don't understand about all of this: the radiated luminosity both with and without true macro atoms is larger than the true luminosity. It is 1.06e44 with macro atoms, and 2.67e44 without macro atoms. It should be 7e43. So the luminosity problem is worse without any true macro atoms.

It's not clear to me exactly where to go from here.

saultyevil commented 4 years ago

From our discussions in the telecon, I ran two new models: one without any inner-shell ionisation data and a second without any recombination rate data.

I am using the same model from this earlier post.

Recombination Rate

I believe the idea of not reading in the recombination data is that Python will default to using the Milne relation to calculate the recombination rate. This way, Python should use the photoionisation rate to calculate the recombination, and so these two processes should be better balanced. Python does do this, but it also reports a lot of errors when this happens.
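For reference (and someone please correct me if I've remembered the statistical-weight factors wrongly), the Milne relation I mean is the detailed-balance link between the recombination and photoionisation cross sections:

    sigma_rec(v) = ( g_lower / (2 g_ion) ) * ( h nu / (m_e c v) )^2 * sigma_ph(nu),   with   h nu = chi + (1/2) m_e v^2

so if the recombination rate is built from the same photoionisation cross sections used for b-f absorption, the two rates should be mutually consistent by construction - which is why I expected this test to balance better.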

tde_matom_conservation_luminosity_balance

When I comment out this data, it has little effect on reducing the amount of luminosity gained during trans_phot. This model is about as well converged as the model in my earlier post at about ~75%.

I can see from the diag files that we have not read in any recombination coefficients, so I don't think I've made an error there.

We have read in   0 Dielectronic recombination coefficients
We have read in   0 Badnell totl Radiative rate coefficients
We have read in   0 Badnell GS   Radiative rate coefficients over the temp range 0.000000e+00 to 1.000000e+99

tde_matom_conservation_0.diag.txt

Inner Shell Ionisation

I think Stuart's reasoning here might be that inner shell ionisation is a potentially troublesome bound-free process in Python?

Either way, removing these data made no difference. This time, the simulation peaked at 81% convergence before finishing at 64% convergence overall.

tde_matom_conservation_luminosity_balance

We have read in   0 Inner shell photoionization cross sections
                  0 have matching electron yield data

So no luck here!

The only common error between these runs is,

Error: scatter: kpkt probability (-1.4935e-03) < 0, zeroing
scatter: photon edge frequency: 9.9883e+15, comoving frequency 9.9734e+15

But I'm not entirely sure if this is related to our new treatment?

kslong commented 4 years ago

I basically confirm what @saultyevil has described. I ran a model using this masterdata file and I still see considerable excess luminosity in the result, and in fact it is worse than the same model run the old way.
h10_hetop_reduced.dat.txt

If anyone thinks one should include or leave out various data files, let me know. @Higginbottom, it would be good if you could take a look.

For reference here is the .pf file I am using for this.
reduced.pf.txt

jhmatthews commented 5 months ago

Upweighting was turned OFF by default by PR https://github.com/agnwinds/python/pull/1051 - due to known issues with luminosity conservation and/or large reweighting factors. Some of the issues are already documented on RTD. Upweighting can still be used via an advanced mode with the -d flag.