Open lea-hagen opened 4 years ago
Not sure. This sounds like it will do many of the same tests as are already done. Unless there is a way to just test the parts that are not provided by the current tests, maybe this code does not need tests? At least at this point.
There is a limit to the execution time for tests on travis and I worry we will hit the limit.
> Not sure. This sounds like it will do many of the same tests as are already done. Unless there is a way to just test the parts that are not provided by the current tests, maybe this code does not need tests? At least at this point.
Hmm, good point. I think it might work to download the files for the earlier steps, so it'll skip generating those, and just run the part where it makes the extinguished grid. Since this is the only code that's set up to make subgrids, I want to at least test that part.
> There is a limit to the execution time for tests on travis and I worry we will hit the limit.
What's the limit? My plan was to do the same grid settings as phat_small, and make it into two subgrids, so I wouldn't expect it to be too much longer.
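As a rough illustration of the download-the-earlier-steps idea, here is a minimal sketch (not the real test). The URL is a placeholder for the remote test-data server, and the extinguished-grid call is left as a hypothetical comment rather than the actual BEAST API:

```python
# Sketch only: fetch the precomputed prior-weighted spectral grid so the
# earlier steps don't have to be rerun inside the test, then exercise only
# the extinguished-grid / subgrid step.  The URL below is a placeholder, and
# the extinguished-grid call is hypothetical.
from astropy.utils.data import download_file


def test_extinguished_grid_subgrids():
    # download the earlier-step product instead of regenerating it
    spec_fname = download_file(
        "https://<remote-data-server>/beast_example_phat_spec_w_priors.grid.hd5",
        cache=True,
    )

    # hypothetical call: make the extinguished grid split into two subgrids,
    # which is the part unique to this code path
    # sed_fnames = make_extinguished_sed_grid(spec_fname, nsubs=2)

    # then check that one output file per subgrid was produced
    # assert len(sed_fnames) == 2
```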
I found the time limit info here:
Travis CI has specific time limits for each job, and will stop the build and add an error message to the build log in the following situations:
- When a job produces no log output for 10 minutes.
- When a job on a public repository takes longer than 50 minutes.
- When a job on a private repository takes longer than 120 minutes.
Given the 50 minute limit per job, I think running the whole `create_physicsmodel` script will be fine. Can probably create individual jobs for each of the other scripts in `tools/run/` also.
Go for it. We can always disable tests that take too long. :-)
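One way to make "disable tests that take too long" easy later is a custom pytest marker that can be deselected on CI. This is just a sketch; the `slow` marker name is an illustrative choice, not an existing BEAST convention:

```python
# Sketch: tag long-running tests with a custom marker so they can be
# deselected on Travis with `pytest -m "not slow"`.  The marker would also
# need to be registered in the pytest config (e.g. setup.cfg) to avoid
# "unknown marker" warnings.
import pytest


@pytest.mark.slow
def test_full_create_physicsmodel_run():
    # the long end-to-end physics-model test would go here
    ...
```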
I've found a possible problem: the file `beast_example_phat_spec_w_priors.grid.hd5` in the remote data has 5164 values for `M_act`, and the one I'm generating has 5152 values. (This is just where the test failed, so I don't know if there are more disagreements.) @karllark, do you know what grid settings you used to make that file? I'm using the usual datamodel for phat_small. I'm not sure if the discrepancy would be from different datamodel files or if the spectral models on the website have changed in the last ~6 months.
You can see what is being used by looking at the testing code for this step. Specifically https://github.com/BEAST-Fitting/beast/blob/master/beast/physicsmodel/stars/tests/test_spectral_grid.py
Other test code might also need to be consulted. Check them out in `beast/physicsmodel/tests` and `beast/physicsmodel/stars/tests`.
Thanks! I was looking in the wrong folder and didn't see those tests.
Ah, I figured out the discrepancy: in `create_physicsmodel`, we remove isochrones that have `logL < -9`, but that step doesn't happen in any of the individual tests. Should we add that to `test_spectral_grid`? I'm happy to do that and send links to updated files.
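For illustration, this is the kind of cut being described, applied to a toy table. The column names and values here are assumptions made for the example; the real cut lives inside `create_physicsmodel` before the spectral grid is built:

```python
# Illustration only: drop isochrone points with logL < -9 (placeholder /
# unphysical entries) before building the spectral grid.
import numpy as np
from astropy.table import Table

iso = Table({"logL": [-9.999, 0.2, 1.5], "M_act": [0.8, 1.0, 2.0]})

# keep only physically meaningful isochrone points
iso = iso[iso["logL"] > -9]

assert np.all(iso["logL"] > -9)
assert len(iso) == 2
```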
This sounds like a fine plan. Easiest if you do this in a separate PR just for this change. Then it should be quick to get merged.
I'm starting to look at how to do testing for subgrids and/or source density splitting. As a first step, I want to check the SED grids, which are created in `create_physicsmodel.py`. But it looks like tests are currently written to test each step of the code independently (the spectral grid, prior/grid weights, final extinguished grid). I could copy-paste the relevant parts of `create_physicsmodel` into a testing script, but the way it's written, that would require a fair amount of restructuring, to the point where I'd be concerned that the test isn't representative of what's actually happening in the original code. Would it be appropriate to just have a test that runs `create_physicsmodel`, and then do a bunch of assert statements afterwards for each of the intermediate file products?
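Roughly, a sketch of that run-then-assert approach might look like the following. The `create_physicsmodel` entry point is left as a hypothetical comment, and most of the product file names are guesses at the usual naming (only the `*_spec_w_priors.grid.hd5` name appears earlier in this thread); the real names would come from the project's datamodel:

```python
# Sketch only: run the full physics-model construction once, then assert on
# each intermediate file product.  The run call and most file names below
# are assumptions for illustration.
import os


def test_create_physicsmodel_products(tmp_path):
    # hypothetical: run the full physics-model construction for the
    # phat_small example, writing everything under tmp_path
    # create_physicsmodel(datamodel, output_dir=tmp_path)

    project = os.path.join(tmp_path, "beast_example_phat")
    expected = [
        "beast_example_phat_spec_grid.hd5",
        "beast_example_phat_spec_w_priors.grid.hd5",
        "beast_example_phat_seds.grid.hd5",
    ]
    # one assert per intermediate file product
    for fname in expected:
        assert os.path.exists(os.path.join(project, fname)), fname
```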