High-resolution imaging is required because most galaxy-scale lenses have Einstein ring radii between 0.25 and 2 arcsec. The mass modeling needed to get meaningful dark-matter constraints from these systems requires the ability to resolve close quasar images (separations well below an arcsec) or to resolve an Einstein ring / arc in the radial direction. So far we have used HST imaging, grism observations, Keck adaptive optics (AO) imaging, or IFU spectroscopy. For ground-based telescopes such as Keck, AO observing is required to correct for the atmosphere.
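For a sense of scale, here is a minimal sketch (assuming a singular isothermal sphere lens, illustrative redshifts and velocity dispersion, and astropy) of why galaxy-scale image separations land near an arcsecond:

```python
import numpy as np
import astropy.units as u
from astropy.constants import c
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
z_lens, z_src = 0.5, 2.0        # illustrative redshifts
sigma_v = 250 * u.km / u.s      # illustrative lens velocity dispersion

D_s = cosmo.angular_diameter_distance(z_src)
D_ls = cosmo.angular_diameter_distance_z1z2(z_lens, z_src)

# Einstein radius of a singular isothermal sphere: theta_E = 4 pi (sigma/c)^2 D_ls / D_s
theta_E = 4 * np.pi * (sigma_v / c).decompose()**2 * (D_ls / D_s) * u.rad
print(theta_E.to(u.arcsec))     # of order 1 arcsec for these values
```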
Spectroscopy is also a critical ingredient for using galaxy-scale lensing as a dark-matter probe, and the redshifts of both the lensing galaxy and the background source are needed in order to convert the angular information from the lens models into physical units such as substructure masses. For this aspect of spectroscopy, long-slit observations are fine. Note, though, that these redshifts are at times difficult to obtain, even with a 10-m telescope. This difficulty arises especially for small-separation quasar lens systems, where the quasar light overwhelms the emission from the lensing galaxy, or for the non-quasar lenses, where one or both objects are faint and/or at high enough redshift to push possible emission lines out of the optical window.
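To make the angular-to-physical conversion concrete, here is a minimal sketch (assuming hypothetical redshifts and an astropy cosmology) of the two quantities that require both redshifts: the critical surface density, which sets the mass scale, and the physical scale subtended by an arcsecond at the lens:

```python
import numpy as np
import astropy.units as u
from astropy.constants import c, G
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
z_l, z_s = 0.5, 2.0     # hypothetical lens and source redshifts

D_l = cosmo.angular_diameter_distance(z_l)
D_s = cosmo.angular_diameter_distance(z_s)
D_ls = cosmo.angular_diameter_distance_z1z2(z_l, z_s)

# Critical surface density: converts a dimensionless convergence map into physical mass
sigma_cr = c**2 / (4 * np.pi * G) * D_s / (D_l * D_ls)
print(sigma_cr.to(u.Msun / u.kpc**2))

# Physical (proper) scale subtended by 1 arcsec at the lens redshift
print(cosmo.kpc_proper_per_arcmin(z_l).to(u.kpc / u.arcsec))
```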
A spectroscopy-based technique that is being used for dark-matter studies utilizes integral field unit spectroscopy behind an AO system. The idea is to measure the fluxes of the individual lensed quasar images in wavelength ranges corresponding to narrow-line emission from the lensed AGN (see Nierenberg et al. 2014, 2017). This emission should be free of microlensing effects and, thus, should provide the true flux ratios of the lensed images.
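As a toy illustration of how those fluxes get used (all numbers below are invented for illustration only), the substructure signal is the deviation of the observed narrow-line flux ratios from the prediction of a smooth lens model:

```python
import numpy as np

# Hypothetical narrow-line fluxes for the four images of a quad (arbitrary units),
# e.g. extracted from AO-fed IFU data
flux_obs = np.array([1.00, 0.92, 0.55, 0.21])     # images A, B, C, D (made up)
flux_model = np.array([1.00, 0.85, 0.60, 0.20])   # smooth-macromodel prediction (made up)

# Flux ratios relative to image A; the narrow-line region is extended enough
# that microlensing should not bias these ratios
ratio_obs = flux_obs / flux_obs[0]
ratio_model = flux_model / flux_model[0]

# Fractional flux-ratio anomaly per image -- the quantity used to constrain substructure
print(ratio_obs / ratio_model - 1)
```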
One issue is how to select the sample of lenses for follow-up: What precision of follow-up is needed? Which lenses are best for follow-up? LSST will find many new lenses, but some will be faint and will likely require the next generation of telescopes (TMT/ELT/JWST) for follow-up, so this is worth some consideration.
Rest-frame mid-IR high-resolution imaging of lensed quasar systems is a technique similar to the narrow-line IFU technique, in the sense that the mid-IR emission should come from a region large enough to avoid significant microlensing contamination. These kinds of observations could be done with JWST.
What other things would help with lens detection? For QSO lenses, wide-field IR imaging is helpful for selecting QSOs. Galaxy-source searches rely on colors and visual detection and may not require ancillary data for target selection.
Another lens-finding technique, which takes advantage of LSST's time-domain component, is to search for extended objects in difference imaging (see this paper by Chris Kochanek). The necessary inputs should, at some level, come "for free" from the LSST data processing.
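Here is a minimal sketch of the idea (a deliberately simplified stand-in for a real pipeline, which would use the PSF model, both signs of the residuals, and multi-epoch information):

```python
import numpy as np
from scipy import ndimage

def find_extended_variables(diff_image, sky_sigma, nsigma=5.0, min_pix=20):
    """Flag connected regions of significant flux in a difference image whose
    footprints are larger than a point source would produce; the thresholds
    are placeholders."""
    mask = np.abs(diff_image) > nsigma * sky_sigma
    labels, nlab = ndimage.label(mask)
    # Number of pixels in each connected region
    sizes = ndimage.sum(mask, labels, index=np.arange(1, nlab + 1))
    # Regions larger than min_pix pixels are candidate extended variables
    return [lab for lab, size in zip(range(1, nlab + 1), sizes) if size >= min_pix]
```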
The DESC strong lensing science collaboration is considering some of these issues -- namely finding lenses and defining a sample for follow-up (high-resolution imaging and spectroscopy). They are interested in variable quasar lenses for time delay cosmography. There is some connection, but the issues here are broader.
The following people have expressed interest in contributing to various topics related to probing dark matter with strong lensing: Thomas Collett, Asantha Cooray, Simon Dye, Nicola Napolitano, Tony Tyson, Aprajita Verma, Risa Wechsler.
Right now, there are ongoing searches for strongly lensed quasars in DES, SDSS, ATLAS, and HSC. Based on my experience with the lens candidates coming out of the STRIDES collaboration and associated work, the first level of follow-up is relatively shallow spectroscopy and/or AO imaging with 4-10m class telescopes (SOAR, NTT, WHT, Magellan, Keck, for this collaboration) to confirm or reject lens candidates selected from the survey data. We can typically go through ~20 candidates per night.
For lens modeling, it is important to characterize the environment and line of sight for each lens. We need to find perturbers that are projected close to the lens, estimate the amount of external shear and convergence, and determine whether there are any significant redshift effects. LSST imaging should be sufficient for a first-order approximation, following the example of H0LiCOW. (More detailed analysis to build full 3-d lens models would require more work; current efforts in that direction use a lot of spectroscopy, but it remains to be seen how much could be done with photo-z's.)
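For reference, the environment / line-of-sight contribution is usually parameterized at lowest order as a constant external convergence plus an external shear term in the lens potential (standard notation; φ_γ is the shear position angle):

```latex
\psi_{\rm ext}(\theta_1, \theta_2) =
  \frac{\kappa_{\rm ext}}{2}\,(\theta_1^2 + \theta_2^2)
  + \frac{\gamma_{\rm ext}}{2}\left[(\theta_1^2 - \theta_2^2)\cos 2\phi_\gamma
  + 2\,\theta_1 \theta_2 \sin 2\phi_\gamma\right]
```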
One thing that is very important for the lensing methods is how to relate the masses that the lensing measures to the masses that come out of the simulations. I think that the first step is for the lensing community to come up with a standard way of reporting their masses, which may be different between the flux-ratio approach and the gravitational imaging approach. At that point, though, I think that it would be great if the simulators could extract the same mass measurements from their simulations, even if that is not their standard way of reporting masses, so that we can compare apples to apples. It seems more proper to do it this way than to have the observers try to convert their measurements to a 'simulations standard', since the simulators know the true mass distributions.
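As one concrete example of a possible "common currency" (purely illustrative, not a proposed standard), a projected mass within a fixed aperture is something both the lens modelers and the simulators can compute. A minimal sketch, assuming the simulation particles have been projected onto the lens plane:

```python
import numpy as np

def projected_aperture_mass(x_kpc, y_kpc, m_particle, r_ap_kpc=0.6):
    """Projected mass of a (sub)halo within a fixed aperture around its center.
    x_kpc, y_kpc: particle positions in the lens plane relative to the subhalo
    center; m_particle: particle masses; r_ap_kpc: aperture radius (the default
    value here is purely illustrative)."""
    r = np.hypot(x_kpc, y_kpc)
    return m_particle[r < r_ap_kpc].sum()
```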
As we push down the mass spectrum, do we need to worry about globular clusters?
That question is particularly relevant for substructure lensing, which is a specific application of strong lensing -- and one of the identified "measurements" in the dark matter graphic.
This is a great thread, and I look forward to hearing more about what you guys discussed and worked on at the workshop. Here are a few (late) comments:
Lens finding is a key issue that's being worked on jointly by the DESC Strong Lensing Working Group (focussed on time-variable sources) and the Strong Lensing Science Collaboration (all lenses). Distilling Anna's questions, I think it's useful to separate general lens finding, which covers both variable and non-variable lenses over a wide parameter space, from selecting the 'best' lens systems for dark matter analyses and the 'best' follow-up targets for this science; the latter, along with the definition of 'best', seems to be the part to focus on here. As Chuck mentioned, the general lens-finding strategy is being developed (part of the DESC-SL Roadmap and the focus of SLSC work) and can feed the DM studies with targets. If we can define "best" or "good" in terms of basic lens parameters derived from the LSST data alone, we can start to flag promising targets for different DM studies with lenses as we find them.
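As a strawman of what such a flag might look like (every threshold, parameter name, and category below is a placeholder, not an agreed definition of "good"):

```python
def flag_dm_followup(theta_E_arcsec, n_images, src_mag_i, is_quasar):
    """Toy selection flag based only on quantities that could in principle be
    derived from the LSST data alone; all cuts are placeholders."""
    if is_quasar and n_images >= 4 and theta_E_arcsec > 0.5:
        return "narrow-line flux-ratio candidate"
    if (not is_quasar) and theta_E_arcsec > 1.0 and src_mag_i < 24.0:
        return "gravitational-imaging candidate"
    return None
```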
Totally agree with the comments on follow-up
For lens modelling, what's the angular size of the field needed around the lens when you say "close to the lens"?
As per the discussion on Monday, we want to emphasize that LSST is going to be fantastic for, e.g., finding lenses, but also that (nearly?) all of the science will require follow-up with other facilities (spectroscopy, high-resolution imaging, rest-frame mid-IR imaging, etc.). As a clarification, this discussion is focused on galaxy-scale strong lens systems.