phoebe-project / phoebe2

PHOEBE - Eclipsing Binary Star Modeling Software
http://phoebe-project.org
GNU General Public License v3.0

WD-supported passbands pack the entire atmcof* tables needlessly #362

Open aprsa opened 4 years ago

aprsa commented 4 years ago

pb.wd_data currently contains the entire atmcof/atmcofplanck files. This is clearly superfluous and needs fixing. The Python part in passbands.py that extracts the particular passband info is written but not used anywhere (pb.extern_wd_atmx in import_wd_atmcof()), and the libphoebe part needs to be adapted to allow for lookup from extracted subtables.

horvatm commented 4 years ago

The entire atmcof + atmcofplanck files use only 4 MB of contiguous memory, so loading everything is very efficient. Storing passbands separately, or loading just individual passbands from the files, is of course possible, but not advisable.

This change would mean somebody needs to read very dense C++ code generated from WD's Fortran code. Nevertheless, loading of the files could be performed with numpy's loadtxt to avoid using libphoebe's loader (libphoebe.wd_readdata), which does not use strings for filenames.
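
As a minimal sketch of that numpy-based alternative (file names, the reshape order and the assumption of uniform rows are mine; the total sizes are the ones quoted later in this thread):

  import numpy as np

  # Sketch only: file names are placeholders and the reshape order is an assumption.
  # Total sizes follow the numbers quoted later in this thread:
  #   atmcofplanck: 1250 numbers   = 25 filters x 50 coefficients
  #   atmcof:       250800 numbers = 25 filters x 19 abundances x 528 coefficients
  planck_table = np.loadtxt('atmcofplanck.dat').ravel().reshape(25, 50)
  atm_table = np.loadtxt('atmcof.dat').ravel().reshape(25, 19, 528)

  # Per-filter subtables (ifil is a 0-based filter index):
  ifil = 7
  planck_ifil = planck_table[ifil]   # 50 numbers
  atm_ifil = atm_table[ifil]         # 19 x 528 numbers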

aprsa commented 4 years ago

4 MB out of ~9 MB per passband (in the absence of extinction tables) is quite hefty; my suggestion would be to grab the coefficients for just that passband and to use a 0 offset for the lookup in the Fortran function. Storing just a single passband's info reduces ~4 MB to ~160 kB, thus shaving ~40% off non-extincted passbands. Worth it or not?
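
For reference, the arithmetic behind those approximate numbers:

  # 25 WD passbands share the ~4 MB atmcof/atmcofplanck block
  per_passband = 4e6 / 25              # ~160 kB for a single passband
  saving = (4e6 - per_passband) / 9e6  # ~0.43, i.e. roughly 40% of a ~9 MB passband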

horvatm commented 4 years ago

I don't know. Eventually, I guess, we can try to do that.

I have looked at the code and we have the following dependencies:

  function planckint: uses table atmcofplanck
  function atmx: uses function planckint and table atmcof

The tables involved are therefore atmcofplanck and atmcof. Actually, table atmcof contains only 154476 non-zero real numbers, so it can be made to consume only 1.2 MB.

I think the best strategy is to hard-code the function planckint and then parse the individual passband/atmospheric coefficients from the atmx table in a smart way.

horvatm commented 3 years ago

@aprsa Is this still relevant? If so, I suggest that we add a new interface to the WD files:

  ifil_planck_table = wd_readdata_planck(filename_planck, ifil)
  ifil_abunin_atm_table = wd_readdata_atm(filename_atm, abunin, ifil)

and then

  wd_planckint(t, ifil_planck_table)
  wd_atmint(t, logg, ifil_planck_table, ifil_abunin_atm_table)

If my calculations are correct:

  size(ifil_planck_table) = 50 numbers = 1250 / 25 filters
  size(ifil_abunin_atm_table) = 528 numbers = 250800 / 25 filters / 19 abundances
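
A usage sketch of that proposed interface, assuming it would be exposed through libphoebe like the existing reader; file names and all input values below are placeholders:

  # Hypothetical calls (the interface above is a proposal, not yet in libphoebe);
  # file names, ifil, abunin, t and logg are placeholder values.
  ifil, abunin = 7, 0.0
  ifil_planck_table = libphoebe.wd_readdata_planck(b'atmcofplanck.dat', ifil)
  ifil_abunin_atm_table = libphoebe.wd_readdata_atm(b'atmcof.dat', abunin, ifil)

  t, logg = 6000., 4.43
  Iplanck = libphoebe.wd_planckint(t, ifil_planck_table)
  Iatm = libphoebe.wd_atmint(t, logg, ifil_planck_table, ifil_abunin_atm_table)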

aprsa commented 3 years ago

I'm always in favor of optimizing the content, and indeed it would be better to pack only the 1/25-th of the data that is actually used. That said, I wouldn't consider this urgent. We're currently working on adding tlusty, tmap and fastwind atmospheres into phoebe, which will increase the passband sizes.

horvatm commented 3 years ago

@aprsa I am just cleaning stuff assigned to me or connected to libphoebe.

Will these new atmospheres be inside the WD framework?

aprsa commented 3 years ago

Nope, they're independent of any other atmosphere.

horvatm commented 3 years ago

@aprsa Is the abundance/metallicity fixed for a star or for the chosen atmosphere?

aprsa commented 3 years ago

In most circumstances, but not generally. So we do need to pack all metallicities and interpolate across them.

horvatm commented 3 years ago

@aprsa @kecnry

In the wd_filter branch I introduced

  d = libphoebe.wd_readdata_filter(bytes(planck, encoding='utf8'), bytes(atm, encoding='utf8'), ifil)

for reading the i-th filter data. To compute the WD Planck intensity for only the i-th filter, do:

  libphoebe.wd_planckint_filter(temps, ifil, d["planck_table"])

and to compute the WD atmosphere intensity for only the i-th filter, do:

  libphoebe.wd_atmint_filter(t, logg, abunin, ifil, d["planck_table"], d["atm_table"])

The new routines are drop-in replacements for the old ones, obtained by just adding the _filter suffix to their names.
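
Putting the three calls together, a usage sketch (file names, the filter index and the input values are placeholders; the argument order follows the calls quoted above):

  import numpy as np
  import libphoebe

  # Placeholder inputs; planck/atm point to the WD coefficient files on disk.
  planck, atm = 'atmcofplanck.dat', 'atmcof.dat'
  ifil = 7                                  # filter index
  abunin = 0.0                              # metallicity
  temps = np.array([5772., 6500., 10000.])  # temperatures in K
  t, logg = 6000., 4.43

  # Read only the i-th filter's coefficients from both tables:
  d = libphoebe.wd_readdata_filter(bytes(planck, encoding='utf8'),
                                   bytes(atm, encoding='utf8'), ifil)

  # WD Planck intensities for the chosen filter:
  Iplanck = libphoebe.wd_planckint_filter(temps, ifil, d["planck_table"])

  # WD model-atmosphere intensity for the chosen filter:
  Iatm = libphoebe.wd_atmint_filter(t, logg, abunin, ifil, d["planck_table"], d["atm_table"])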