Open CarterDW opened 1 year ago
| Totals | |
|---|---|
| Change from base Build 7020570041: | -0.2% |
| Covered Lines: | 6524 |
| Relevant Lines: | 6707 |
@CarterDW it seems like flake8 is failing, and the coverage (i.e. lines of code tested by new unit tests) has fallen a bit. I may be able to look at the actual implementation towards the end of the week.
I'm not sure about the coverage, but I went through and reformatted the code, so hopefully it should pass flake8 now.
@damonge , I am talking with @CarterDW about this PR now and will also do a code review ASAP. Would also be great to get your expert eyes on it (even quickly).
Thanks @damonge! (and Happy New Year!) Indeed, the nl bias x IA terms have been a goal for a long time, and we now have them! Thanks to some work from @CarterDW , as well as from the UTD group and a former student working with me, we have a couple of independent FAST-PT implementations of (most of) these terms. There is an active dev branch of FAST-PT where we are doing the cross checks.
Regarding benchmarks, we can generate the Pks from FAST-PT at a given cosmology and redshift (or two) and then store them as a static .dat file for comparison. It's not independent, since both use FAST-PT, but it does test the plumbing and API.
Great news! Happy to have those terms in CCL when you think they're ready!
Regarding benchmarks: yep, that sounds good. I agree it's not independent, but it's a good test nevertheless, in case we modify anything further down the line (e.g. interpolation/extrapolation schemes or whatever) that changes the accuracy of the calculation. This is what we did for all the other fast-pt terms (in fact, you can probably build on this benchmark test script and on the script that was used to generate the associated data from fastpt).
(and Happy New Year!)
@jablazek @CarterDW can I check what the status of this PR is?
@CarterDW @jablazek can I check what the status is? Thanks!
Apologies for not seeing this email before. For the moment, I think we're shelving this PR while we work to also add in the galaxy bias x IA terms and try to fix some strange results. Thanks!
OK, thanks!
Hi @damonge . Sorry for the slow updates. A bit more info: we did our code comparison between the two implementations of the NL bias x IA terms. Most of them agreed, with two exceptions. We have been tracking down the differences and should have that resolved soon.
However, @CarterDW , is there a reason to not finish the PR for the derivative term and then do a separate one for the NL bias terms and the t_ij terms?
The main concern I would have is the behavior of the derivative term, since that is one of the terms I've been looking at as the cause of some divergences in the tracers when it is set to a non-zero value.
I will try and do some more debugging this week and if I can sort that then I agree, doing just the derivative term should provide no issues.
@CarterDW @jablazek : any updates on this one?
@CarterDW, we sorted out the unexpected high-k behavior, right?
@jablazek @damonge yes, we figured out the divergent behavior I was seeing, so we should be able to finish this pull request. I'll spend this week resolving some of the things we discussed here, such as changing the implementation of the check for FAST-PT, and adding the benchmark checks!
OK, great!
I implemented a new IA k2 derivative term in pyccl/ept.py, adding it to the existing auto- and cross-correlations with a ck constant defined in pyccl/tracers.py, modeled on the pre-existing c1 constant. The user can either use CCL's built-in k2 term or call FAST-PT and use the newly implemented FAST-PT derivative term. CCL's term is used by default, so older versions of FAST-PT can still be used with ept.py.
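To make the fallback behavior concrete, here is a minimal sketch of how the selection between the two terms could work. All names here (`ia_k2_term`, `CK`, `fastpt_pk`) are illustrative placeholders, not the actual pyccl API; the real implementation lives in pyccl/ept.py and pyccl/tracers.py.

```python
import numpy as np

# Hypothetical normalization constant, analogous to the pre-existing c1
# constant in pyccl/tracers.py. The value below is purely illustrative.
CK = 1.0


def ia_k2_term(k, pk, use_fastpt=False, fastpt_pk=None):
    """Return the k^2 derivative contribution to the IA power spectrum.

    With use_fastpt=False (the default, mirroring the PR so that older
    FAST-PT versions keep working), the built-in k^2 * P(k) form is used.
    With use_fastpt=True the caller supplies the FAST-PT-computed term.
    """
    if use_fastpt:
        if fastpt_pk is None:
            raise ValueError("use_fastpt=True requires fastpt_pk")
        return CK * fastpt_pk
    # Built-in fallback: scale P(k) by k^2 and the ck-style constant.
    return CK * k**2 * pk
```

The design point is that the default path has no FAST-PT dependency at all, so the new term degrades gracefully on installations with an older FAST-PT.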
I also added functionality in pyccl/ept.py that checks whether the installed FAST-PT can be imported and, if the user wishes to use the FAST-PT derivative term, whether that installation includes the newest derivative term.
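A two-stage check like the one described (can FAST-PT be imported at all, and if so, is it new enough to expose the derivative term?) could be sketched as below. The function and attribute names are assumptions for illustration; the actual attribute the new term is exported under in FAST-PT may differ.

```python
import importlib


def fastpt_k2_available(term_name="IA_der"):
    """Return True only if fastpt imports AND exposes `term_name`.

    `term_name` is a hypothetical placeholder for whatever attribute the
    new FAST-PT derivative term is exported under; adjust to the real API.
    """
    try:
        fastpt = importlib.import_module("fastpt")
    except ImportError:
        return False  # FAST-PT is not installed at all
    # Older FAST-PT versions import fine but lack the new term.
    return hasattr(fastpt, term_name)
```

Returning False in both failure modes lets the caller fall back to CCL's built-in term with a single boolean check, rather than distinguishing "not installed" from "too old".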