Closed: jiwoncpark closed this pull request 4 years ago.
Hi Ji Won, I see that the tests are failing. Is it possible for you to fix this? For your second point, I just added a routine in `lenstronomy.Cosmo.cosmo_solver` that does exactly that (it's very simple). I also changed some definitions to be compatible with PEP 8, so you might want to pull the latest lenstronomy version and see whether you can make use of it.
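For reference, the conversion in question (time-delay distance to H0) can be sketched in plain Python. This is an illustrative re-implementation under a flat-ΛCDM assumption, not the lenstronomy routine itself; the function names `ddt_from_h0` and `h0_from_ddt` are mine:

```python
import math

C_KM_S = 299792.458  # speed of light [km/s]

def comoving_distance(z, h0, om=0.3, n_steps=1000):
    """Comoving distance [Mpc] in flat LCDM via trapezoidal integration."""
    def inv_e(zp):
        return 1.0 / math.sqrt(om * (1 + zp) ** 3 + (1 - om))
    dz = z / n_steps
    integral = 0.5 * (inv_e(0.0) + inv_e(z))
    for i in range(1, n_steps):
        integral += inv_e(i * dz)
    return C_KM_S / h0 * integral * dz

def ddt_from_h0(h0, z_lens, z_src, om=0.3):
    """Time-delay distance D_dt = (1 + z_l) * D_l * D_s / D_ls [Mpc]."""
    dc_l = comoving_distance(z_lens, h0, om)
    dc_s = comoving_distance(z_src, h0, om)
    d_l = dc_l / (1 + z_lens)
    d_s = dc_s / (1 + z_src)
    d_ls = (dc_s - dc_l) / (1 + z_src)  # flat-universe relation
    return (1 + z_lens) * d_l * d_s / d_ls

def h0_from_ddt(ddt, z_lens, z_src, om=0.3):
    """Invert D_dt -> H0; at fixed Omega_m, D_dt scales exactly as 1/H0."""
    h0_ref = 70.0
    return h0_ref * ddt_from_h0(h0_ref, z_lens, z_src, om) / ddt
```

Because every distance carries a 1/H0 prefactor, the inversion reduces to a rescaling at fixed Ωm, which is why the routine can be "very simple".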
Hi Ji Won, I don't see the file `h0rton/infer_h0_mcmc.py` in the PR nor in the branch. Did you add it to git?
@sibirrer I've been very lazy with updating tests. I'll work on it today.
Profiler results:

```
Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.003    0.003   33.416   33.416 infer_h0_mcmc.py:28(main)
        1    0.000    0.000   23.171   23.171 fitting_sequence.py:49(fit_sequence)
        1    0.000    0.000   23.170   23.170 fitting_sequence.py:188(mcmc)
        1    0.000    0.000   23.167   23.167 sampler.py:114(mcmc_emcee)
        1    0.000    0.000   23.165   23.165 ensemble.py:360(run_mcmc)
       41    0.001    0.000   23.165    0.565 ensemble.py:199(sample)
       81    0.014    0.000   23.114    0.285 ensemble.py:392(compute_log_prob)
     6970    0.030    0.000   23.085    0.003 ensemble.py:543(__call__)
     6970    0.025    0.000   23.055    0.003 likelihood.py:140(logL)
       40    0.018    0.000   22.489    0.562 red_blue.py:52(propose)
     6962    0.087    0.000   22.120    0.003 likelihood.py:153(log_likelihood)
     6962    0.109    0.000   12.389    0.002 mcmc_utils.py:157(evaluate)
     6962    0.029    0.000   11.449    0.002 gaussian_nll.py:244(__call__)
     6962    1.229    0.000   11.200    0.002 gaussian_nll.py:140(nll_mixture)
    13924    5.533    0.000    8.468    0.001 gaussian_nll.py:78(nll_low_rank)
     6962    0.037    0.000    6.809    0.001 position_likelihood.py:52(logL)
     6962    0.679    0.000    6.437    0.001 position_likelihood.py:150(source_position_likelihood)
        1    0.000    0.000    4.503    4.503 train_val_config.py:32(from_file)
        1    0.000    0.000    4.502    4.502 train_val_config.py:16(__init__)
        1    0.007    0.007    3.671    3.671 train_val_config.py:108(set_XY_metadata)
```
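A report like the one above can be produced with Python's standard-library profiler; here is a minimal sketch (the profiled `main` is a trivial stand-in, not the actual `infer_h0_mcmc.py` entry point):

```python
import cProfile
import io
import pstats

def main():
    # hypothetical stand-in for the script's main() function
    return sum(i * i for i in range(10000))

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# Sort by cumulative time, matching the report shown above
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)  # print the top 5 entries
report = stream.getvalue()
print(report)
```

Sorting by `cumulative` (rather than `tottime`) surfaces the call chain that dominates wall-clock time, which is how the `gaussian_nll` hotspot shows up here.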
The `custom_logL_addition` function (the BNN loss function) takes quite a bit of time. I expected the CPU-GPU-CPU data transfers to be costly. I was planning to make a numpy version of the loss function anyway for testing, and replacing the GPU loss function with the numpy version should help speed up the script. Until then, I'll run inference on the CPU.
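As a sketch of what such a numpy version could look like, here is a two-component diagonal-Gaussian mixture NLL with a numerically stable log-sum-exp. The function names and the diagonal parametrization are illustrative assumptions; the actual `gaussian_nll.py` uses a low-rank covariance (`nll_low_rank` in the profile above):

```python
import numpy as np

def nll_diagonal(y, mu, logvar):
    """Negative log-likelihood of y under a diagonal Gaussian, summed over dims."""
    precision = np.exp(-logvar)
    return 0.5 * np.sum(precision * (y - mu) ** 2 + logvar + np.log(2.0 * np.pi),
                        axis=-1)

def nll_mixture(y, mu1, logvar1, mu2, logvar2, logit_w):
    """NLL of a two-component diagonal-Gaussian mixture.

    The mixture weight is sigmoid(logit_w); the two components are combined
    with a log-sum-exp for numerical stability.
    """
    w = 1.0 / (1.0 + np.exp(-logit_w))
    log_p1 = -nll_diagonal(y, mu1, logvar1)
    log_p2 = -nll_diagonal(y, mu2, logvar2)
    stacked = np.stack([np.log(w) + log_p1, np.log(1.0 - w) + log_p2], axis=0)
    m = np.max(stacked, axis=0)  # shift by the max before exponentiating
    return -(m + np.log(np.sum(np.exp(stacked - m), axis=0)))
```

A pure-numpy version like this also makes the loss testable on CPU without a torch dependency.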
This branch implements the hybrid method, i.e. the method that further optimizes the BNN posterior with a likelihood on the image positions (feeding the observed image positions as data with a reasonable astrometric uncertainty of ~5 mas) jointly with the likelihood of the time delays.
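The image-position term of that joint likelihood can be sketched as an independent Gaussian per image; `image_position_logL` and its signature are my assumptions for illustration (in the branch this goes through lenstronomy's position likelihood machinery):

```python
import math

def image_position_logL(x_model, y_model, x_obs, y_obs, sigma=0.005):
    """Gaussian log-likelihood of observed image positions [arcsec].

    sigma defaults to 0.005 arcsec (~5 mas), the astrometric uncertainty
    assumed in the PR description.
    """
    logl = 0.0
    for xm, ym, xo, yo in zip(x_model, y_model, x_obs, y_obs):
        # quadratic residual term for one image
        logl += -0.5 * (((xm - xo) / sigma) ** 2 + ((ym - yo) / sigma) ** 2)
        # 2D Gaussian normalization constant
        logl += -math.log(2.0 * math.pi * sigma ** 2)
    return logl
```

The total log-likelihood fed to the sampler would then be this term plus the time-delay likelihood and the BNN posterior term.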
The main script is `h0rton/infer_h0_mcmc.py` (analogous to `h0rton/infer_h0.py`), which uses utility functions in `h0rton/h0_inference/h0_utils.py` and should be very similar to the `catalogue modelling.ipynb` notebook in Lenstronomy Extensions.

I expect this PR thread to include a discussion of speed as well as a science review. Engineering-wise, I had questions about implementing the following necessary functionalities:

- `infer_h0.py` as well, if we're going to be sampling in D_dt space.