Closed STSpencer closed 3 years ago
Hi Sam, I think you just need to do the following to convert between what's on the axis of the reference (B&E) NSB spectrum (photons / (ns nm sr m2)) and nLb.
Scale each point in the NSB vs. wavelength plot by the photon energy (hc/λ) in each wavelength bin and multiply by 10^9 (ns to s); that gives W / (nm sr m2). To get lumens you also need to weight by the photopic luminosity function V(λ) and multiply by 683 lm/W, which results in lm / (nm sr m2).
Integrate --> lm / (sr m2). This is then luminous flux per solid angle per unit area, i.e. luminance... which is basically what a lambert measures.
A little Wikipedia and I see that 1 Lb = 0.3183 lm / (sr cm2), i.e. roughly 3183 lm / (sr m2)
... I could of course be completely wrong!
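For what it's worth, the steps above can be sketched in a few lines. This is just an illustration, not anything from sstcam-simulation: the function name is mine, and you have to supply the photopic luminosity function V(λ) yourself (the photon-energy scaling alone only gets you to watts, not lumens):

```python
import numpy as np

H = 6.626e-34  # Planck constant [J s]
C = 2.998e8    # speed of light [m / s]


def nsb_to_nanolamberts(wavelength_nm, photon_flux, v_lambda):
    """Convert a differential NSB photon spectrum to nanolamberts.

    wavelength_nm : wavelength grid [nm]
    photon_flux   : photon flux at each wavelength [photons / (ns nm sr m^2)]
    v_lambda      : photopic luminosity function V(lambda) on the same grid
    """
    energy_j = H * C / (wavelength_nm * 1e-9)           # photon energy [J]
    power = photon_flux * 1e9 * energy_j                # W / (nm sr m^2)
    spectral_luminance = 683.0 * v_lambda * power       # lm / (nm sr m^2)
    # Trapezoidal integration over wavelength -> lm / (sr m^2)
    luminance = np.sum(
        0.5 * (spectral_luminance[1:] + spectral_luminance[:-1])
        * np.diff(wavelength_nm)
    )
    # 1 lambert = 1e4/pi lm / (sr m^2); report in nanolamberts
    return luminance / (1e4 / np.pi) * 1e9
```

As a sanity check, a flat 1 photon/(ns nm sr m2) between 500 and 600 nm with V(λ) = 1 comes out at roughly 8 nL.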
The key is just to realise that you need to convert photon flux to photon power (and then to luminous flux). I see above that you may have already realised all this and that I'm a step behind. Indeed, the problem is just knowing the fraction of the NSB from the Gaia data that falls into our detectable wavelength range. It's probably reasonable to just always use the shape of B&E and normalise the area to the Gaia level. Reading this back, I'm not explaining it very well; I'll try again when it's not 8 pm on a Friday :)
Can you only request data from the Gaia archive in a certain band? "The Blue Photometer (BP) operates in the wavelength range 330–680 nm;" ... we can just assume a uniform distribution within that band (or follow the shape of B&E of course, somehow assuming that B&E is not dominated by zodiacal light, and that the scattering of starlight from dust is proportional to the total Gaia brightness in that band).
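Normalising the B&E shape to a Gaia-derived level in the BP band, as suggested, could look something like this (a sketch under assumptions: the function name is hypothetical, and it takes the band-integrated Gaia photon flux as an input rather than querying the archive):

```python
import numpy as np


def scale_bne_to_gaia(wavelength_nm, bne_flux, gaia_band_flux,
                      band=(330.0, 680.0)):
    """Rescale the B&E spectral shape so its integral over the Gaia BP
    band matches a band-integrated flux inferred from Gaia.

    bne_flux       : B&E photon flux [photons / (ns nm sr m^2)]
    gaia_band_flux : Gaia-derived flux integrated over `band`
                     [photons / (ns sr m^2)]
    """
    in_band = (wavelength_nm >= band[0]) & (wavelength_nm <= band[1])
    x, y = wavelength_nm[in_band], bne_flux[in_band]
    # Trapezoidal integral of the B&E shape over the BP band
    bne_band_integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return bne_flux * gaia_band_flux / bne_band_integral
```

The output keeps the B&E shape everywhere but carries the Gaia-level normalisation, which is the "normalise the area to the Gaia level" idea above.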
OK, sorry for all this spam! Here's an approximate method.
Also when you reach units of photons/(ns sr nm m2), we can use this tool to complete the conversion to MHz (to stay consistent): https://github.com/sstcam/sstcam-simulation/blob/master/sstcam_simulation/utils/efficiency.py.
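Just to spell out what that final step amounts to (this is generic physics, not the efficiency.py API; the function name and arguments are mine): fold the differential flux with an efficiency curve, multiply by mirror area and pixel solid angle, and note that 1 photon/ns is 1 GHz:

```python
import numpy as np


def nsb_rate_mhz(wavelength_nm, photon_flux, efficiency,
                 mirror_area_m2, pixel_solid_angle_sr):
    """Fold a differential NSB photon flux [photons / (ns sr nm m^2)]
    with a telescope + camera efficiency curve and the pixel etendue
    to get a detected per-pixel rate in MHz.
    """
    detected = photon_flux * efficiency              # photons / (ns sr nm m^2)
    # Trapezoidal integration over wavelength -> photons / (ns sr m^2)
    integral = np.sum(0.5 * (detected[1:] + detected[:-1])
                      * np.diff(wavelength_nm))
    rate_per_ns = integral * mirror_area_m2 * pixel_solid_angle_sr
    return rate_per_ns * 1e3                         # photons/ns = GHz -> MHz
```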
Right, I've performed NSB rate calculations on the three Eta Car fields using the mean method above and the scaled B+E approach (the get_scaled_nsb_rate() method from sstcam_simulation). I've also updated my table-writer code to include extra columns in photons/(ns sr nm m**2) and Hz using the mean method, so updated tables with pixel NSB values for the whole camera can be found in the results folders on this repo (for example, here).
Summary of results:
- Bright field (normal observation of the Eta Car field): mean calculation method 75.96 MHz; scaled B+E approach 73.10 MHz
- Associated dark field: mean calculation method 42.97 MHz; scaled B+E approach 36.12 MHz
- 0.51 FLI Eta Car moonlit run: mean calculation method 79.18 MHz; scaled B+E approach 70.84 MHz
Doing a rough calculation with Konrad's spreadsheet, the emission lines on their own contribute around 17 MHz.
It seems as though we're reasonably happy with these Hz values being generated by the current scripts so I think we can close this issue for now.
I've been in touch with Konrad, and he's provided me with the spreadsheets he uses for calculating NSB rates in Hz, which I've spent some time digging through. They can be found in Gnumeric format here (CTA standard password). These use sim_telarray data to calculate atmospheric correction factors and camera pixel-area data, which, in combination with an NSB spectrum, are then used to calculate the NSB rate. We have this sim_telarray output data for CHEC-S+ASTRI, so this should be a decent place to start.
The problem I'm having is that Konrad's spreadsheets require the NSB input to be a spectrum in the format of Benn and Ellison (here), which has units of microJanskys/arcsec^2 as a function of wavelength in nanometres. This isn't what the NSB tool provides, as it relies on Gaia data (for which there's no spectral information) and the Krisciunas model (which is integrated over all wavelengths). So you get NSB output in the form of nanoLamberts (or equivalently 1e-5/pi candela m^-2) for a point on the sky, or integrated nanoLamberts/pixel (from the pixel-extraction code in fromfits.py). It's not immediately clear to me how to marry these different things up; does anyone have any ideas?
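For anyone following the units here: a nanolambert is just a constant factor away from SI luminance (cd/m^2), e.g. (helper name is mine):

```python
import math

# 1 L = (1/pi) cd/cm^2 = 1e4/pi cd/m^2, so 1 nL = 1e-5/pi cd/m^2
NL_TO_CD_PER_M2 = 1e-9 * 1e4 / math.pi


def nanolamberts_to_cd_m2(nl):
    """Convert a luminance in nanolamberts to cd / m^2."""
    return nl * NL_TO_CD_PER_M2
```

So 1 nL is about 3.18e-6 cd/m^2.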