pernak18 / g-point-reduction

Jupyter Notebook evolution of RRTMGP g-point reduction (AKA k-distribution optimization) that started with Menno's [k-distribution-opt](https://github.com/MennoVeerman/k-distribution-opt) repo

Parabola Diagnostics #27

Closed · pernak18 closed this issue 1 year ago

pernak18 commented 1 year ago

After reverting to fd459233eed913074da10e6784f22fb420ec4588, we're looking more closely at the data going into the quadratic regression. Some questions we have:
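
For context, a minimal sketch of the kind of quadratic (parabola) fit being diagnosed here, assuming (weight scale, dCost) points per trial; the function and variable names are illustrative, not the repo's actual code:

```python
import numpy as np

def fit_parabola(scales, dCosts):
    """Fit dCost = a*s**2 + b*s + c and return the zero-slope point
    (the vertex), i.e., the scale that minimizes dCost when a > 0."""
    a, b, c = np.polyfit(scales, dCosts, deg=2)
    vertex = -b / (2 * a)
    return vertex, (a, b, c)

# e.g., three trial scales bracketing the candidate minimum (made-up values)
scales = np.array([0.5, 1.0, 2.0])
dCosts = np.array([0.12, -0.03, 0.25])
sMin, coeffs = fit_parabola(scales, dCosts)
```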

pernak18 commented 1 year ago

1 is because of how I handled negative kmajor values -- I just assign a weight scale of 0 (the initialization value) instead of opting out of the calculation like Eli requested. This is not the most efficient route -- I should just skip these calculations entirely -- but I didn't know how to quit the process without crashing the code. The easy workaround is to just ignore trials where this happens, since that is effectively what we were doing in the first place (a rough sketch follows).
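
A hedged sketch of that "ignore trials" workaround, which filters out trials with negative kmajor values before the fit; the `trials` structure and the `'kmajor'` key are assumptions, not the repo's actual data model:

```python
import numpy as np

def valid_trials(trials):
    """Drop trials whose modified kmajor contains any negative values,
    instead of assigning them a weight scale of 0 and carrying them
    through the regression."""
    return [t for t in trials if np.all(t['kmajor'] >= 0)]
```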

With 2, I'm almost certain this is because of the k-distribution netCDF reassignment with 1plus and 2plus. 2plus is the last to be assigned, so that explains why its value is always the winner (see the sketch below). So I need to re-implement some (but not all) of the things I was trying to do in c5f6d702ea59c649624368431d7854e89d6d5a66.
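
A hedged illustration of the ordering pitfall, with made-up candidate data: when 1plus and 2plus are assigned to the same slot in sequence, the last write always wins regardless of cost, whereas selecting explicitly by dCost removes the order dependence:

```python
# all names and values here are hypothetical
candidates = {
    '1plus': {'ncFile': 'band01_1plus.nc', 'dCost': -0.042},
    '2plus': {'ncFile': 'band01_2plus.nc', 'dCost': -0.017},
}

# ordering-dependent (buggy) pattern: later assignments clobber earlier ones,
# so 2plus "wins" even though 1plus has the lower dCost
for name, trial in candidates.items():
    winner = trial

# ordering-independent fix: pick the trial with the lowest dCost explicitly
winner = min(candidates.values(), key=lambda t: t['dCost'])
```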

pernak18 commented 1 year ago

Should be all set with 93c43bd64434d1b08a95490ab77507e246118fd6. With that commit, the dCost points used in the parabola fit, the associated trials and bands, the weight scale (or zero-point from the regression), and the final dCost after recomputation with the regression roots are printed to CSV files for each iteration.
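
A minimal sketch of per-iteration CSV diagnostics along those lines, using pandas; the column names and output path are assumptions, not necessarily what that commit writes:

```python
import pandas as pd

def write_diagnostics(iteration, records, outDir='.'):
    """records: list of dicts, one per point used in the parabola fit."""
    df = pd.DataFrame(records, columns=[
        'trial', 'band', 'weight_scale', 'dCost_fit', 'dCost_recomputed'])
    df.to_csv(f'{outDir}/parabola_diagnostics_iter{iteration:03d}.csv',
              index=False)

# usage with a single made-up fit point
write_diagnostics(1, [
    {'trial': '1plus', 'band': 1, 'weight_scale': 0.5,
     'dCost_fit': 0.12, 'dCost_recomputed': 0.10},
])
```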