PyAutoLens: Open Source Strong Gravitational Lensing
https://pyautolens.readthedocs.io/
MIT License

Slam pipeline positions not able to update from previous search #270

Closed. AstroAaron closed this issue 4 months ago.

AstroAaron commented 4 months ago

Hello,

I am running the SLAM pipeline from parametric to pixelization for an interferometer dataset. I modified the script so that the parametric model search uses the tracer mass from the mass_total run of a different dataset, i.e. the mass distribution is fixed to be exactly the end result of that other mass_total run. Unfortunately, when the source_lp run completes, the code breaks while trying to get the positions_likelihood for the next analysis object:

 Traceback (most recent call last):
  File "/projects/ag-riechers/users/aaron/ALMACE/PyAutoLensNN/source_pixelized_2LensesNewOptVelocityReconstruction.py", line 228, in <module>
    positions_likelihood=source_lp_results.last.positions_likelihood_from(
  File "/projects/ag-riechers/users/aaron/ALMACE/PyAutoLensNN/PyAutoLens/autolens/analysis/result.py", line 174, in positions_likelihood_from
    self.image_plane_multiple_image_positions
  File "/projects/ag-riechers/users/aaron/ALMACE/PyAutoLensNN/PyAutoLens/autolens/analysis/result.py", line 99, in image_plane_multiple_image_positions
    multiple_images = solver.solve(
  File "/projects/ag-riechers/users/aaron/ALMACE/PyAutoLensNN/PyAutoLens/autolens/point/point_solver.py", line 351, in solve
    self.grid_with_coordinates_to_mass_profile_centre_removed_from(
  File "/projects/ag-riechers/users/aaron/ALMACE/PyAutoLensNN/PyAutoLens/autolens/point/point_solver.py", line 71, in grid_with_coordinates_to_mass_profile_centre_removed_from
    for centre in centres.in_list:
AttributeError: 'NoneType' object has no attribute 'in_list'
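The traceback shows `centres` coming back as `None` and then being iterated via `.in_list`. The failure pattern (and the guard that would avoid it) can be illustrated with a minimal, self-contained sketch; the class and function names here are hypothetical stand-ins, not the actual PyAutoLens internals:

```python
# Minimal sketch of the failure mode: an optional result that can be None
# is used as if it were always a container. All names are hypothetical.

class Centres:
    """Stands in for the grid-like object the solver normally returns."""
    def __init__(self, centres):
        self.in_list = centres

def centres_from(mass_profiles):
    """Returns None when no mass-profile centres are found, mirroring how
    the solver can hand back nothing for certain models."""
    centres = [p["centre"] for p in mass_profiles if "centre" in p]
    return Centres(centres) if centres else None

def remove_centres(grid, centres):
    # Guarding against the None case before touching .in_list is exactly
    # what the traceback above shows is missing.
    if centres is None:
        return grid
    return [point for point in grid if point not in centres.in_list]

grid = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
print(remove_centres(grid, centres_from([])))
print(remove_centres(grid, centres_from([{"centre": (1.0, 1.0)}])))
```

Without the `is None` guard, the first call would raise the same `AttributeError: 'NoneType' object has no attribute 'in_list'` as above.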

The individual SLAM scripts are the same as in runs where this is not an issue. I am using the PyAutoLens version that still uses planes. The relevant part of my script is:

positions_likelihood = al.PositionsLHPenalty(positions=positions, threshold=1.0)

analysis = al.AnalysisInterferometer(
    dataset=dataset,
    positions_likelihood=positions_likelihood,
    settings_inversion=al.SettingsInversion(use_linear_operators=True),
)

tracer = al.from_json(tracerfile)

source_lp_results = slam.source_lp2Lenses.run(
    settings_search=settings_search,
    analysis=analysis,
    lens_bulge_0=None,
    lens_disk_0=None,
    lens_bulge_1=None,
    lens_disk_1=None,
    mass_0=tracer.planes[0].galaxies[0].mass,
    mass_1=tracer.planes[0].galaxies[1].mass,
    shear=tracer.planes[0].galaxies[0].shear,
    source_bulge=af.Model(al.lp.Sersic),
    redshift_source=redshift_source,
    redshift_lens_0=redshift_lens_0,
    redshift_lens_1=redshift_lens_1,
)

settings_inversion = al.SettingsInversion(
    use_linear_operators=True,
    image_mesh_min_mesh_pixels_per_pixel=3,
    image_mesh_min_mesh_number=5,
    image_mesh_adapt_background_percent_threshold=0.1,
    image_mesh_adapt_background_percent_check=0.8,
)

analysis = al.AnalysisInterferometer(
    dataset=dataset,
    positions_likelihood=source_lp_results.last.positions_likelihood_from(
        factor=3.0, minimum_threshold=0.2
    ),
    settings_inversion=settings_inversion,
)

source_pix_results = slam.source_pix2LensesOptBest.run(
    settings_search=settings_search,
    analysis=analysis,
    source_lp_results=source_lp_results,
    image_mesh=al.image_mesh.Hilbert,
    mesh=al.mesh.VoronoiNN,
    regularization=al.reg.AdaptiveBrightnessSplit,
)
Jammy2211 commented 4 months ago

Are the positions input here:

positions_likelihood = al.PositionsLHPenalty(positions=positions, threshold=1.)

Accurate, robust, and something you trust to use for all fits?

AstroAaron commented 4 months ago

They are good enough within the threshold (see the attached image with the positions overlaid).

I am not doing this yet for all fits; this is just for a single velocity bin. I still have to find a way to automatically determine the positions before the source_lp run for the future datasets to be fitted.

Jammy2211 commented 4 months ago

Is there any way for you to specify all positions manually? The position solver going on behind the scenes is not 100% reliable yet (we're working on fixing this atm), so doing it by hand and getting positions you therefore trust is a good shout (this is what I do).

With these lines of code:

    positions_likelihood=source_lp_results.last.positions_likelihood_from(
        factor=3.0, minimum_threshold=0.2
    ),

You can actually do this if you have positions already:

    positions_likelihood=source_lp_results.last.positions_likelihood_from(
        factor=3.0, minimum_threshold=0.2, positions=positions
    ),

This will use the input positions (i.e. it will skip the position solver) but still update the threshold using them.

This is more robust and should fix the bug posted above, so try that :).

No idea what's causing the bug.

AstroAaron commented 4 months ago

Thank you for the input, I will do it that way then! This also means I will need to look into how to automatically derive the positions for all my datasets for the manual input, ideally without needing to image each velocity bin manually.

Another question: would it be a bad idea to use the positions derived from the continuum image? This would mean that ~2 of the manually defined positions do not correspond to emission in the current dataset. They would still be valid for the mass model, though, like the black dots on the right part of the attached image with positions.

Jammy2211 commented 4 months ago

This also means I will need to look into how to automatically derive the positions for all my datasets for the manual input.

Can you not just use this GUI or manual input script:

https://github.com/Jammy2211/autolens_workspace/blob/release/scripts/imaging/data_preparation/gui/positions.py
https://github.com/Jammy2211/autolens_workspace/blob/release/scripts/imaging/data_preparation/examples/optional/positions.py

Another question: would it be a bad idea to use the positions derived from the continuum image? This would mean that ~2 of the manually defined positions do not correspond to emission in the current dataset. They would still be valid for the mass model, though, like the black dots on the right part of the image:

I think it would be fine, the line:

    positions_likelihood=source_lp_results.last.positions_likelihood_from(
        factor=3.0, minimum_threshold=0.2, positions=positions
    ),

Is always updating the distance threshold to be 3.0 times the distance the positions trace to for the best previous solution... so even if your positions are not great, the threshold will adapt to prevent it from rejecting plausible mass models.
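If I understand the update correctly, the arithmetic amounts to scaling the traced separation by the factor while flooring at the minimum threshold. A rough sketch (the function name and exact rule are my assumption, not the PyAutoLens implementation):

```python
def updated_threshold(max_traced_separation, factor=3.0, minimum_threshold=0.2):
    """Hypothetical sketch: the new threshold is `factor` times how far
    apart the positions end up when traced by the best previous model,
    floored at `minimum_threshold` so it never becomes overly strict."""
    return max(factor * max_traced_separation, minimum_threshold)

# A model that traces the positions to within 0.15" gives a looser
# threshold than one tracing to within 0.02", which hits the floor.
print(updated_threshold(0.15))
print(updated_threshold(0.02))
```

This is why poor input positions mostly cost you constraining power rather than wrongly rejecting plausible mass models.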

AstroAaron commented 4 months ago

Can you not just use this GUI or manual input script:

That would mean imaging over 466 individual velocity bins (because I am doing this for several molecular emission lines that are quite broad) and then starting the GUI script / finding the positions for each of them. I will probably instead write some code that automatically finds the brightest pixel in a restoring-beam-sized area within some rectangular or more complicated mask, applied to each of the binned cubes.
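That brightest-pixel search could be sketched with numpy along these lines (a rough sketch only; the toy image, mask, and beam radius are placeholders, not my actual data):

```python
import numpy as np

def brightest_position(image, mask, beam_radius_pix):
    """Find the brightest pixel inside `mask` (True = search here), then
    zero out a beam-sized disc around it so repeated calls on the
    returned image find distinct peaks. Returns (row, col) and the
    blanked image."""
    searched = np.where(mask, image, -np.inf)
    peak = np.unravel_index(np.argmax(searched), image.shape)
    # Blank a circular, restoring-beam-sized region around the peak.
    rows, cols = np.ogrid[: image.shape[0], : image.shape[1]]
    disc = (rows - peak[0]) ** 2 + (cols - peak[1]) ** 2 <= beam_radius_pix**2
    blanked = np.where(disc, 0.0, image)
    return peak, blanked

# Toy 5x5 image with two peaks; search only the right half of the image.
image = np.zeros((5, 5))
image[1, 1] = 10.0   # outside the mask, so it should be ignored
image[3, 4] = 7.0
mask = np.zeros((5, 5), dtype=bool)
mask[:, 3:] = True
peak, remaining = brightest_position(image, mask, beam_radius_pix=1)
print(peak)
```

Calling it repeatedly on `remaining` would pick off one peak per restoring beam, which is roughly the per-bin automation described above.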

I think it would be fine, the line:

    positions_likelihood=source_lp_results.last.positions_likelihood_from(
        factor=3.0, minimum_threshold=0.2, positions=positions
    ),

Is always updating the distance threshold to be 3.0 times the distance the positions trace to for the best previous solution... so even if your positions are not great, the threshold will adapt to prevent it from rejecting plausible mass models.

I will try this and see how the solutions look. I would love to go that way, fingers crossed.