wadawson opened this issue 6 years ago
Here is the link to the simulations of LSST's sensitivity of intermediate mass black holes including paralensing signal: https://my.syncplicity.com/share/rkyf6hfhooe1kt1/sim_truths_alt_sched_RA0Decn15.csv password: lsstsimdata
**Explanation of fields in Will's file:**

- fieldID: irrelevant
- seq_n: ID number
- lens_signal: irrelevant
- y_0: impact parameter in Einstein radii
- t_e: Einstein crossing time in days
- t_0: peak time in MJD (modified Julian date)
- m_lens: lens mass [solar masses]
- d_d = d_lens [in kpc]
- theta_e: Einstein radius (radians)
- v_t: transverse velocity
- lambda: celestial longitude of the source star [degrees]
- beta: celestial latitude of the source star [degrees]
- t_c: time of closest approach (when Earth is closest to the Sun-source line)
- t_p: time of perihelion of Earth's orbit
- snr_full: full SNR of all lensing signal
- snr_plax: SNR due to parallax only
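For orientation, the `theta_e` and `t_e` columns can be cross-checked against `m_lens`, `d_d`, and `v_t` with the standard point-lens relations. A minimal sketch, assuming an 8 kpc bulge source and a 200 km/s transverse velocity (illustrative values, not taken from the file):

```python
import numpy as np

# Physical constants (SI)
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m / s
M_SUN = 1.989e30     # kg
KPC = 3.086e19       # m

def einstein_radius(m_lens, d_lens, d_source=8.0):
    """Angular Einstein radius in radians for a lens of m_lens solar
    masses at d_lens kpc, with the source at d_source kpc."""
    dd, ds = d_lens * KPC, d_source * KPC
    return np.sqrt(4 * G * m_lens * M_SUN / C**2 * (ds - dd) / (dd * ds))

def crossing_time(theta_e, d_lens, v_t=200.0):
    """Einstein crossing time in days, for transverse velocity v_t in km/s."""
    return theta_e * d_lens * KPC / (v_t * 1e3) / 86400.0

theta_e = einstein_radius(30.0, 4.0)   # ~2.7e-8 rad (~5.5 mas)
t_e = crossing_time(theta_e, 4.0)      # ~190 days
```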
Observations are 30-second exposures; this is all for a 23-magnitude star at 8 kpc (in the bulge). A typical error is ~0.08-0.3 magnitude, mostly due to sky background. LSST expects to observe 1-8 billion stars this faint or brighter.
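As a rough translation of those magnitude errors into per-epoch flux SNR (using the standard relation sigma_m ≈ (2.5/ln 10)/SNR; this conversion is my addition, not part of the original post):

```python
import numpy as np

# A magnitude error sigma_m corresponds to a fractional flux error
# sigma_F/F = sigma_m * ln(10)/2.5, so SNR ~ 1.0857 / sigma_m.
def flux_snr(sigma_m):
    return 2.5 / np.log(10) / sigma_m

print(flux_snr(0.08))  # ~13.6 per epoch at the small end of the error range
print(flux_snr(0.3))   # ~3.6 at the large end
```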
Optical depth ref: https://arxiv.org/abs/astro-ph/0502363
A computation of the LSST PBH constraints as a function of mass:
The code to make the figure:
```python
import csv

import numpy as np
import matplotlib.pyplot as plt

def observed_fraction(csvdata, snrcut=5, full=True, nm=60):
    """Find the expected number of events. snrcut is the SNR to require for
    a detection. If full == True, use the full SNR, otherwise the parallax SNR."""
    with open(csvdata) as csvfile:
        snr = csv.reader(csvfile)
        data = np.array([[float(elem) for elem in row] for row in snr])
    mlens = data[:, 6]
    if full:
        snr_full = data[:, -2]
    else:
        snr_full = data[:, -1]
    ii = np.where(snr_full > snrcut)
    mbins = np.logspace(np.log10(np.min(mlens)), np.log10(np.max(mlens)), nm)
    hist = np.histogram(mlens[ii], mbins)
    hist2 = np.histogram(mlens, mbins)
    mbins2 = (hist[1][1:] + hist[1][:-1]) / 2
    return mbins2, hist[0] / hist2[0]

def number_of_events(f_pbh):
    """Find the expected number of events from PBHs.
    NOTE: this neglects any potential dependence of the optical depth
    on PBH mass for large PBHs, whose Einstein radius is large enough
    that the crossing time is longer than the survey time."""
    # This is a lower bound (Will Dawson, private comm. :) )
    number_of_stars = 1e9
    # Paczynski 96 says 5e-7 to the LMC
    # We use Sumi et al. 2005 (OGLE)
    optical_depth = 4.48e-6
    return optical_depth * number_of_stars * f_pbh

def make_fig():
    """Make the figure!"""
    hist_full = observed_fraction("sim_truths_alt_sched_RA0Decn15.csv", full=True)
    hist_par = observed_fraction("sim_truths_alt_sched_RA0Decn15.csv", full=False)
    ev = number_of_events(1)
    plt.semilogx(hist_full[0], 1e2 / (ev * hist_full[1]), label="Full SNR")
    plt.semilogx(hist_par[0], 1e2 / (ev * hist_par[1]), label="Parallax SNR")
    plt.ylabel(r"$f_\mathrm{PBH}$ (x100)")
    plt.xlabel(r"$M_\mathrm{PBH}$")
    plt.title("LSST constraints")
    plt.legend(loc='upper right')
```
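To unpack the plotted quantity (my reading of the code, not stated in the original post): with the Sumi et al. optical depth and 1e9 stars, `number_of_events(1)` is 4480, so `1e2/(ev * fraction)` is 100x the f_PBH at which one detectable event is expected:

```python
optical_depth = 4.48e-6
number_of_stars = 1e9

ev = optical_depth * number_of_stars   # expected events if f_PBH = 1
print(ev)                              # 4480.0

# With a detection fraction of 1, the curve bottoms out at
# 100 / ev ~ 0.022, i.e. f_PBH ~ 2.2e-4 for one expected detection.
print(100 / ev)
```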
We note that a strong upper bound on the PBH mass is the mass of the smallest dwarf galaxies, which is around 10^7-10^8 solar masses.
Here is a simulation with an extended mass range (0.01-1e8 solar masses): 2000 samples https://my.syncplicity.com/share/ifcuz25ybkvdhsg/sim_truths_OpSimRA0Decn15_2000samp 20000 samples https://my.syncplicity.com/share/wy8xfwj2gamiavw/sim_truths_OpSimRA0Decn15_20000samp password: lsstsimdata
Some notes on assumptions in the simulations that will affect the validity of the ultimate conclusions:
First and foremost, stellar variability noise has not been modeled. The only noise incorporated is the photometric measurement noise, as determined from the 5-sigma limiting magnitude of each epoch in OpSim.
It is assumed that the optimal matched filter is used for each event. Something close to this is achievable even if the signal is not known a priori, though with significant computational investment; this is avoided for the purposes of this hack.
Initially we were using a uniform distribution in lens distance from Earth to the bulge.
There may be an issue with the likelihood ratio of the full signal and the uniform Gaussian, due to the overall offset of the microlensing signal from the baseline flux. However, this will not affect the parallax SNR.
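The optimal matched-filter SNR mentioned above, for a known template s(t) with independent per-epoch Gaussian errors sigma_i, is just the quadrature sum of the per-point significances. A minimal sketch with an illustrative Gaussian bump (not the actual microlensing model used in the simulations):

```python
import numpy as np

def matched_filter_snr(signal, sigma):
    """SNR of a known template observed with independent Gaussian
    per-epoch errors: SNR^2 = sum_i (s_i / sigma_i)^2."""
    signal = np.asarray(signal, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    return np.sqrt(np.sum((signal / sigma) ** 2))

# Illustrative: a weak Gaussian bump sampled at 100 epochs with 0.1 mag errors.
t = np.linspace(-50, 50, 100)
template = 0.05 * np.exp(-0.5 * (t / 10.0) ** 2)
sigma = np.full_like(t, 0.1)
print(matched_filter_snr(template, sigma))  # ~2.1 for this weak bump
```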
Extended mass range constraints with 20000 points:
Zipped together 8 files that each have 25k PBHs.
These assume a triangular density distribution (ideally we'd do a 1/r distribution), with p(r=0) = 0 and p(r=8) = 0.25.
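For reference, that triangular density normalizes to p(r) = r/32 on [0, 8] kpc (so p(8) = 8/32 = 0.25), and can be drawn by inverse-CDF sampling: the CDF is r^2/64, so r = 8*sqrt(u). A quick sketch (not the original sampling code; the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_lens_distance(n):
    """Draw lens distances in kpc from the triangular density
    p(r) = r/32 on [0, 8], i.e. p(0) = 0 and p(8) = 0.25.
    The CDF is r^2/64, so the inverse-CDF transform is r = 8*sqrt(u)."""
    return 8.0 * np.sqrt(rng.uniform(size=n))

r = sample_lens_distance(100000)
print(r.mean())  # analytic mean is 16/3 ~ 5.33 kpc
```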
EDIT: forgot to set seed, so rerunning now
new zip:
Plot from the full dataset:
and this was the code:
```python
import csv
import glob

import numpy as np
import matplotlib.pyplot as plt

def observed_fraction(csvdata, snrcut=5, full=True, nm=60):
    """Find the expected number of events. snrcut is the SNR to require for
    a detection. If full == True, use the full SNR, otherwise the parallax SNR."""
    files = glob.glob(csvdata)
    mlens = np.array([])
    snr_full = np.array([])
    for cvd in files:
        with open(cvd, 'r') as csvfile:
            # Skip the header line
            csvfile.readline()
            snr = csv.reader(csvfile)
            data = np.array([[float(elem) for elem in row] for row in snr])
        mlens = np.concatenate([data[:, 6], mlens])
        if full:
            tsnrs = data[:, -2]
        else:
            tsnrs = data[:, -1]
        snr_full = np.concatenate([tsnrs, snr_full])
    ii = np.where(snr_full > snrcut)
    mbins = np.logspace(np.log10(np.min(mlens)), np.log10(np.max(mlens)), nm)
    hist = np.histogram(mlens[ii], mbins)
    hist2 = np.histogram(mlens, mbins)
    mbins2 = (hist[1][1:] + hist[1][:-1]) / 2
    return mbins2, hist[0] / hist2[0]

def number_of_events(f_pbh):
    """Find the expected number of events from PBHs.
    NOTE: this neglects any potential dependence of the optical depth
    on PBH mass for large PBHs, whose Einstein radius is large enough
    that the crossing time is longer than the survey time."""
    # This is a lower bound (Will Dawson, private comm. :) )
    number_of_stars = 1e9
    # Paczynski 96 says 5e-7 to the LMC
    # We use Sumi et al. 2005 (OGLE)
    optical_depth = 4.48e-6
    return optical_depth * number_of_stars * f_pbh

def make_fig(csvdata, nbins=60, snrcut=5):
    """Make the figure!"""
    hist_full = observed_fraction(csvdata, snrcut=snrcut, full=True, nm=nbins)
    hist_par = observed_fraction(csvdata, snrcut=snrcut, full=False, nm=nbins)
    ev = number_of_events(1)
    plt.semilogx(hist_full[0], 1e2 / (ev * hist_full[1]), label="Full SNR")
    plt.semilogx(hist_par[0], 1e2 / (ev * hist_par[1]), label="Parallax SNR")
    plt.ylabel(r"$f_\mathrm{PBH}$ (x100)")
    plt.xlabel(r"$M_\mathrm{PBH}$")
    plt.title("LSST constraints")
    plt.legend(loc='upper right')
    plt.tight_layout()
```
There is something weird below a mass of 10^-2 solar masses, where the parallax-only signal is larger than the full signal. This could easily be investigated by plotting some of the light curves for these low-mass microlensing events and visualizing the data.
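A hedged sketch of such a check, using the same column conventions as the code above (mass in column 6, full and parallax SNR in the last two columns); `path` is a placeholder for one of the simulation CSV files:

```python
import csv

import numpy as np

def suspicious_events(path, mass_cut=1e-2):
    """Return indices of events below mass_cut where the parallax-only SNR
    exceeds the full SNR, which should not normally happen."""
    with open(path) as f:
        f.readline()  # skip the header line
        data = np.array([[float(x) for x in row] for row in csv.reader(f)])
    mlens, snr_full, snr_plax = data[:, 6], data[:, -2], data[:, -1]
    ii = np.where((mlens < mass_cut) & (snr_plax > snr_full))[0]
    for i in ii:
        print(f"event row {i}: m={mlens[i]:.2e}, "
              f"snr_full={snr_full[i]:.2f}, snr_plax={snr_plax[i]:.2f}")
    return ii
```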
Exploring a case in the very-low-mass regime.
In particular, at the outset, we discussed stellar-dynamical, microlensing, and CMB constraints on primordial black holes (largely reflecting the interests of the parties involved). We are open to broadening consideration to the full range of primordial black hole constraints.