VERITAS-Observatory / EventDisplay_v4

A reconstruction and analysis pipeline for VERITAS.
BSD 3-Clause "New" or "Revised" License

Reflectivity measurements #49

Closed: mireianievas closed this issue 3 years ago

mireianievas commented 4 years ago

A thread to investigate reflectivity measurements.

So far I have compiled the measurements from 3 sites:

image

mireianievas commented 4 years ago

I tweaked the colors a bit and tried to divide the DB values so that they 'roughly fit' the T-factors from Tony,

image

It actually does not look too bad. The only open question is why the old 'facet' measurements from 2014 are so high; I will try to redo those averages myself to check them.

More data I am gathering to put everything together:

Different spectra: the reflectivity of the mirrors as simulated for V6, together with the Cherenkov spectrum, the NSB spectrum, the PMT QE (I got it from the filter webpage, so it is probably for the old PMTs, but it gives an idea of where they peak) and a standard Johnson-Cousins B-filter (similar to the one used for the WDR).

I think with this I can take the old data and fold it myself with the B-filter to get a reflectivity value.
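A minimal sketch of that folding step (hypothetical arrays, not the actual data files; the filter is just a rough B-band box for illustration):

```python
import numpy as np

def fold_with_filter(wl, refl, filt_wl, filt_t):
    """Band-average a reflectivity curve, weighting by a filter
    transmission curve (e.g. a Johnson-Cousins B filter)."""
    # interpolate the filter onto the reflectivity grid (zero outside)
    w = np.interp(wl, filt_wl, filt_t, left=0.0, right=0.0)
    return float(np.sum(refl * w) / np.sum(w))

# toy example: a flat 85% reflectivity folded with a rough B-band box
wl = np.linspace(300.0, 700.0, 401)        # nm, uniform grid
refl = np.full_like(wl, 0.85)
filt_wl = np.array([380.0, 390.0, 480.0, 490.0])
filt_t = np.array([0.0, 1.0, 1.0, 0.0])
print(round(fold_with_filter(wl, refl, filt_wl, filt_t), 3))  # 0.85
```

The same weighted average works with the Cherenkov or NSB spectrum as the weight instead of the filter curve.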

image

Individual reflectivities that the DB stores (I think they correspond to different times and mirrors)

image

mireianievas commented 4 years ago

This is pretty worrying:

https://veritas.sao.arizona.edu/wiki/index.php/Mirror_Reflectivity

As you can see in the different reflectivity curves, the peaks have moved significantly, from ~350 nm to about 450 nm, between 2009 and 2016. So here is the question: in order to compare with the B-filter-only measurements, how should we weight them?

mireianievas commented 4 years ago

I took the time to scan all the average curves from that wiki page.

Here you can find ~half of them, for reference

image

Then I folded the reference (GRISU) reflectivity, and each of the curves mentioned before, with the Cherenkov spectrum as weight to obtain a single value per curve, which I plot instead of the values that someone put in the wiki. This is what I get; remember that the recent DB values have been upscaled artificially (and by eye) to match the T-factors from Tony.

image

Still, something is weird about 2014-2016: the two methods (individual panels with full wavelength-dependent reflectivity measurements, and WDR) don't match at all.

GernotMaier commented 4 years ago

Lots of work - really important to understand what is going on.

mireianievas commented 4 years ago

About the last point: with "those derived by myself" do you mean the values in the DB? Those approximately match, but 2019 is missing and there seem to be some (minor?) differences we should understand. Perhaps it is just the way Tony normalized them; I will have a look tomorrow.

About the specular and diffuse reflectivity: that makes total sense. This means we basically cannot do much better than this, right?

mireianievas commented 4 years ago

Continuing on this ...

Do we know where the absolute gains are stored? There is a beautiful table in the VOFFLINE DB with the LaserRun per telescope. However, it stops on 2015-04-23 ... Are they stored anywhere else and kept up to date?

image

mireianievas commented 4 years ago

Absolute gains from single PE measurements (Qi's link on http://veritash.sao.arizona.edu:8081/AnalysisAndCalibration/4128 )

image

mireianievas commented 4 years ago

To take into account for future estimations: there is a scaling in the CARE sims for the gains and the Winston cones that must be included when computing the T- and g-factors. I had completely ignored this until now in this thread (note that Tony's values, the ones we have been testing so far, should be fine).

* TLCFG 0  1 0 0 0.93  0.97  0.0
* TLCFG 1  2 0 0 0.93  1.0  0.0
* TLCFG 2  3 0 0 0.93  0.98  0.0
* TLCFG 3  4 0 0 1.00  0.95  0.0

(https://veritas.sao.arizona.edu/wiki/images/0/02/CARE_VERITAS_AfterPMTUpgrade_V6_140916.txt)

mireianievas commented 4 years ago

There are also reference values for the gain under VERITAS.Epochs.runparameter which seem to be used by ED, so I am using those instead of the file mentioned in the previous entry. With them, I get the following g-factors from single-PE measurements.

image

"filtered" means filtered and smoothed with scipy.signal.savgol_filter
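For reference, that smoothing step boils down to something like this (made-up data, not the actual gain series):

```python
import numpy as np
from scipy.signal import savgol_filter

# made-up noisy gain-like time series
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 101)
truth = 0.9 + 0.05 * np.sin(x)
y = truth + rng.normal(0.0, 0.02, x.size)

# Savitzky-Golay: fit a cubic in a sliding 11-sample window
y_smooth = savgol_filter(y, window_length=11, polyorder=3)
```

The window length and polynomial order here are illustrative, not the ones actually used in the notebooks.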

mireianievas commented 4 years ago

T-factors I derive from David Hanna's WDR

image

So now it should be easy to compute the S-factors. Note the really fast drop in reflectivity in autumn 2014; maybe that is the key to the overestimated fluxes @GernotMaier.
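Assuming the S-factor is simply the product S = g · T with independent uncertainties added in quadrature (my reading of the tables later in this thread), the combination is:

```python
import math

def s_factor(g, g_err, t, t_err):
    """S = g * T; relative errors added in quadrature
    (g and T assumed independent)."""
    s = g * t
    return s, s * math.hypot(g_err / g, t_err / t)

# e.g. T1 in 2014-2015: g = 0.893 +- 0.033, T = 0.786 +- 0.024
s, err = s_factor(0.893, 0.033, 0.786, 0.024)
print(round(s, 3), round(err, 3))  # 0.702 0.034
```

With those inputs this reproduces the corresponding S[1] entry in the seasonal tables.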

GernotMaier commented 4 years ago

Note that the absolute gains (dc/pe) for each VERITAS epoch listed in VERITAS.Epochs.runparameter are not used anywhere in the evndisp analysis. So they shouldn't be applied in this context here.

(they were required for the model3D analysis - a project which is not followed up anymore)

GernotMaier commented 4 years ago

Sorry, I should have read your entry more carefully: yes, it is fine to use them for the g-factor determination. The origin of these values is documented: http://veritash.sao.arizona.edu:8081/AnalysisAndCalibration/3173

GernotMaier commented 4 years ago

To take into account for future estimations: there is a scaling in the CARE sims for the gains and the Winston cones that must be included when computing the T- and g-factors. I had completely ignored this until now in this thread (note that Tony's values, the ones we have been testing so far, should be fine).

* TLCFG 0  1 0 0 0.93  0.97  0.0
* TLCFG 1  2 0 0 0.93  1.0  0.0
* TLCFG 2  3 0 0 0.93  0.98  0.0
* TLCFG 3  4 0 0 1.00  0.95  0.0

(https://veritas.sao.arizona.edu/wiki/images/0/02/CARE_VERITAS_AfterPMTUpgrade_V6_140916.txt)

Is this page linked to https://veritas.sao.arizona.edu/wiki/index.php/CARE ?

We really have to make sure that we use the right CARE configuration.

GernotMaier commented 4 years ago

On the reflectivity ratios: I don't understand how mirrors can get so bad in a few months. Has that been discussed before?

But it is clear that if there is a difference of 10-15% between the beginning and end of a season, we should discuss if we need to divide the season into further parts.

Given that we have all the machinery now in scripts, this would just mean a bit more computing (e.g., we could handle a factor of 2-3 more epochs).

Great work - very interested in seeing how this compares with the correction values we have been using until now.

mireianievas commented 4 years ago

I need to check the numbers carefully, yes.

Regarding the rapid degradation: was there any major sandstorm? I remember that when I was working on the site selection for CTA there were some pretty impressive sandstorms in Arizona around 2013/2014/2015 (I don't remember exactly which year). Could it be that? Is there any record of that?

GernotMaier commented 4 years ago

No records of this - and I don't remember any mention of dust/sand storms in the context of VERITAS (note that the CTA candidate site was 6-8 h north of the VERITAS site)

mireianievas commented 4 years ago

Some more work on this topic:

image

The corresponding g-factors are

#### g factors
season       mjd_mean  width/d    g[1]    err[1]    g[2]    err[2]    g[3]    err[3]    g[4]    err[4]
---------  ----------  -------  ------  --------  ------  --------  ------  --------  ------  --------
2010-2011     55591.5    151.5   1.001     0.058   1.034     0.034   0.975     0.033   0.991     0.038
2011-2012     55957      152     0.992     0.048   1.008     0.044   0.991     0.033   1.011     0.039
2012-2013     56322.5    151.5   0.99      0.087   1.033     0.027   0.986     0.045   1.007     0.037
2013-2014     56687.5    151.5   0.936     0.028   0.983     0.028   0.896     0.042   0.969     0.053
2014-2015     57052.5    151.5   0.893     0.033   0.957     0.039   0.877     0.031   0.916     0.047
2015-2016     57418      152     0.898     0.026   0.935     0.02    0.892     0.036   0.922     0.029
2016-2017     57783.5    151.5   0.875     0.023   0.95      0.026   0.892     0.031   0.922     0.026
2017-2018     58148.5    151.5   0.926     0.026   0.974     0.033   0.918     0.037   0.925     0.023
2018-2019     58513.5    151.5   0.903     0.026   0.96      0.032   0.9       0.02    0.878     0.044
2019-2020     58879      152     0.924     0.013   0.994     0.011   0.953     0.019   0.956     0.082

image

Again the seasonal averages

#### T factors
season       mjd_mean  width/d    T[1]    err[1]    T[2]    err[2]    T[3]    err[3]    T[4]    err[4]
---------  ----------  -------  ------  --------  ------  --------  ------  --------  ------  --------
2011-2012     55957      152     1         0       1         0       1         0       1         0
2012-2013     56322.5    151.5   0.997     0.05    1.016     0.05    0.945     0.05    0.916     0.05
2013-2014     56687.5    151.5   0.947     0.011   0.986     0.012   0.844     0.016   0.894     0.018
2014-2015     57052.5    151.5   0.786     0.024   0.852     0.018   0.779     0.008   0.819     0.017
2015-2016     57418      152     0.8       0.016   0.813     0.007   0.741     0.008   0.848     0.011
2016-2017     57783.5    151.5   0.739     0.009   0.813     0.01    0.699     0.041   0.783     0.014
2017-2018     58148.5    151.5   0.723     0.013   0.781     0.016   0.711     0.026   0.763     0.023
2018-2019     58513.5    151.5   0.692     0.011   0.712     0.017   0.662     0.01    0.711     0.012
2019-2020     58879      152     0.677     0.012   0.721     0.012   0.742     0.02    0.694     0.008

And finally the S-factors, which I calculated both by multiplying the spline predictions and by multiplying the seasonal averages of the g- and T-factors

image

#### S factors
season       mjd_mean  width/d    S[1]    err[1]    S[2]    err[2]    S[3]    err[3]    S[4]    err[4]
---------  ----------  -------  ------  --------  ------  --------  ------  --------  ------  --------
2011-2012     55957      152     0.992     0.048   1.008     0.044   0.991     0.033   1.011     0.039
2012-2013     56322.5    151.5   0.988     0.1     1.049     0.059   0.932     0.065   0.923     0.061
2013-2014     56687.5    151.5   0.887     0.029   0.969     0.03    0.756     0.038   0.866     0.051
2014-2015     57052.5    151.5   0.702     0.034   0.815     0.038   0.684     0.025   0.75      0.042
2015-2016     57418      152     0.719     0.026   0.76      0.017   0.661     0.028   0.782     0.026
2016-2017     57783.5    151.5   0.647     0.018   0.773     0.023   0.624     0.043   0.722     0.025
2017-2018     58148.5    151.5   0.669     0.022   0.761     0.03    0.653     0.036   0.706     0.028
2018-2019     58513.5    151.5   0.625     0.021   0.683     0.028   0.595     0.016   0.625     0.033
2019-2020     58879      152     0.626     0.014   0.717     0.014   0.707     0.024   0.663     0.058

The comparison with the values we were using until now:

GernotMaier commented 4 years ago

Ok - this is very nice and seems to have the right trend. We should discuss what to do about the periods with very strong gradients (2013..2015). Do we want to divide them into two? The change during a period should maybe be of the same order as the error bars?

You've developed quite a few nice scripts to process this. Can we make sure that they are somehow preserved? As this is a cross-analysis topic, maybe add a new repo on the VERITAS github?

mireianievas commented 4 years ago

About the years with large gradients: we can split them up, sure. I am still not fully sure that will help with the 2014/2015 high Crab flux, since Crab is mostly visible from November (I guess those are the 'summer' runs?) and by then the reflectivity has already dropped a lot. But we can try it out, and it may in any case help with other sources. Should I make a tentative list of periods with those changes and put some IRF jobs in the queue to test this, or do you prefer to do it yourself @GernotMaier?

About the tools: for now it is a set of messy Jupyter notebooks, but I can dedicate some time to tidying and documenting them a bit to make sure we can simplify any future analysis.

All this is seriously starting to call for a paper; should I open an Overleaf to start putting things together?

GernotMaier commented 4 years ago

Yes to both questions.

mireianievas commented 4 years ago

Ok, I have redefined the bins to make them overlap a bit (to smooth out the features and to take into account that the reflectivity measurements are rather sparse in time) and introduced some summer bins in 2013-2015. Then I computed a suggested MJD range over which to apply those measurements.
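The per-bin averaging behind the tables below amounts to this sketch (hypothetical measurement arrays; the bin edges mimic the range_mjd column):

```python
import numpy as np

def bin_stats(mjd, values, lo, hi):
    """Mean and sample scatter of the measurements whose MJD
    falls inside one (possibly overlapping) bin [lo, hi)."""
    v = values[(mjd >= lo) & (mjd < hi)]
    return float(v.mean()), float(v.std(ddof=1))

# made-up reflectivity-ratio measurements vs. MJD
mjd = np.array([56100.0, 56200.0, 56300.0, 56400.0, 56500.0])
vals = np.array([0.99, 0.97, 0.96, 0.95, 0.93])

mean, scatter = bin_stats(mjd, vals, 56101, 56367)  # a 2012-2013-like bin
print(round(mean, 3))  # 0.965
```

Overlapping bins just means calling this with ranges that share edges or overlap, so sparse measurements contribute to more than one season.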

Gains

image

#### g factors
season       mjdmean    width  range_mjd      g[1]    err[1]    g[2]    err[2]    g[3]    err[3]    g[4]    err[4]
---------  ---------  -------  -----------  ------  --------  ------  --------  ------  --------  ------  --------
2010-2011    55591.5    151.5  55408-55774   1.018     0.058   1.043     0.034   0.964     0.033   0.989     0.038
2011-2012    55957      152    55774-56101   0.991     0.048   1.001     0.044   0.998     0.033   1.023     0.039
2012-2013    56246      137    56101-56367   0.965     0.098   1.04      0.031   0.992     0.049   1.006     0.042
2013-2013    56489.5    137.5  56367-56588   0.94      0.052   1.02      0.015   0.916     0.044   0.994     0.053
2013-2014    56687.5     90.5  56588-56778   0.938     0.02    0.989     0.025   0.915     0.035   0.944     0.032
2014-2014    56870      122    56778-56968   0.89      0.017   0.949     0.024   0.872     0.035   0.95      0.031
2014-2015    57067.5    136.5  56968-57242   0.891     0.033   0.957     0.039   0.869     0.031   0.911     0.047
2015-2016    57418      152    57242-57600   0.887     0.026   0.939     0.02    0.886     0.036   0.935     0.029
2016-2017    57783.5    151.5  57600-57966   0.877     0.023   0.954     0.026   0.889     0.031   0.929     0.026
2017-2018    58148.5    151.5  57966-58331   0.93      0.026   0.958     0.033   0.905     0.037   0.936     0.023
2018-2019    58513.5    151.5  58331-58696   0.893     0.026   0.951     0.032   0.905     0.02    0.862     0.044
2019-2020    58879      152    58696-59061   0.924     0.013   0.994     0.011   0.953     0.019   0.956     0.082

Reflectivity

image

#### T factors
season       mjdmean    width  range_mjd      T[1]    err[1]    T[2]    err[2]    T[3]    err[3]    T[4]    err[4]
---------  ---------  -------  -----------  ------  --------  ------  --------  ------  --------  ------  --------
2011-2012    55957      152    55812-56101   1         0       1         0       1         0       1         0
2012-2013    56246      137    56101-56367   0.987     0.05    1.011     0.05    0.958     0.05    0.931     0.05
2013-2013    56489.5    137.5  56367-56588   0.969     0.05    1.013     0.05    0.902     0.05    0.878     0.05
2013-2014    56687.5     90.5  56588-56778   0.947     0.004   0.99      0.005   0.852     0.005   0.882     0.007
2014-2014    56870      122    56778-56968   0.887     0.044   0.926     0.046   0.821     0.025   0.87      0.037
2014-2015    57067.5    136.5  56968-57242   0.796     0.024   0.858     0.005   0.778     0.006   0.82      0.012
2015-2016    57418      152    57242-57600   0.797     0.011   0.814     0.005   0.742     0.007   0.843     0.007
2016-2017    57783.5    151.5  57600-57966   0.739     0.008   0.815     0.008   0.693     0.007   0.783     0.007
2017-2018    58148.5    151.5  57966-58331   0.727     0.008   0.781     0.011   0.72      0.022   0.765     0.014
2018-2019    58513.5    151.5  58331-58696   0.69      0.011   0.721     0.012   0.668     0.005   0.709     0.012
2019-2020    58879      152    58696-59061   0.672     0.005   0.726     0.007   0.743     0.015   0.693     0.007

Total throughput (S-factors)

image

#### S factors
season       mjdmean    width  range_mjd      S[1]    err[1]    S[2]    err[2]    S[3]    err[3]    S[4]    err[4]
---------  ---------  -------  -----------  ------  --------  ------  --------  ------  --------  ------  --------
2011-2012    55957      152    55812-56101   0.991     0.048   1.001     0.044   0.998     0.033   1.023     0.039
2012-2013    56246      137    56101-56367   0.953     0.108   1.051     0.061   0.95      0.069   0.937     0.063
2013-2013    56489.5    137.5  56367-56588   0.911     0.069   1.033     0.053   0.827     0.061   0.872     0.068
2013-2014    56687.5     90.5  56588-56778   0.889     0.019   0.979     0.025   0.779     0.03    0.833     0.029
2014-2014    56870      122    56778-56968   0.789     0.042   0.879     0.049   0.716     0.036   0.827     0.044
2014-2015    57067.5    136.5  56968-57242   0.709     0.034   0.82      0.034   0.676     0.025   0.747     0.04
2015-2016    57418      152    57242-57600   0.707     0.023   0.764     0.017   0.657     0.028   0.788     0.025
2016-2017    57783.5    151.5  57600-57966   0.648     0.018   0.777     0.022   0.616     0.022   0.728     0.022
2017-2018    58148.5    151.5  57966-58331   0.676     0.02    0.748     0.028   0.652     0.033   0.716     0.022
2018-2019    58513.5    151.5  58331-58696   0.616     0.021   0.686     0.026   0.605     0.014   0.612     0.033
2019-2020    58879      152    58696-59061   0.622     0.01    0.721     0.01    0.708     0.021   0.662     0.058

mireianievas commented 4 years ago

After checking the runs closest to those new subperiods, I get the following EPOCHs.

* EPOCH V4 0  46641
* EPOCH V5 46642 63372
* EPOCH V6_2012_2013 63373 67410
* EPOCH V6_2013_2013 67411 70170
* EPOCH V6_2013_2014 70171 73235
* EPOCH V6_2014_2014 73236 75021
* EPOCH V6_2014_2015 75022 78239
* EPOCH V6_2015_2016 78240 82587
* EPOCH V6_2016_2017 82588 86848
* EPOCH V6_2017_2018 86849 90608
* EPOCH V6_2018_2019 90609 93830
* EPOCH V6_2019_2020 93830 999999 

I will create a new v484 version with these changes and put some jobs in the queue.
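A minimal lookup sketch using those boundaries exactly as listed (note that run 93830 appears in both of the last two ranges; a first-match rule resolves it):

```python
# Run-number boundaries copied from the EPOCH list above: (first, last)
EPOCHS = [
    ("V4", 0, 46641),
    ("V5", 46642, 63372),
    ("V6_2012_2013", 63373, 67410),
    ("V6_2013_2013", 67411, 70170),
    ("V6_2013_2014", 70171, 73235),
    ("V6_2014_2014", 73236, 75021),
    ("V6_2014_2015", 75022, 78239),
    ("V6_2015_2016", 78240, 82587),
    ("V6_2016_2017", 82588, 86848),
    ("V6_2017_2018", 86849, 90608),
    ("V6_2018_2019", 90609, 93830),
    ("V6_2019_2020", 93830, 999999),
]

def epoch_of(run):
    """Return the first epoch whose run range contains `run`."""
    for name, first, last in EPOCHS:
        if first <= run <= last:
            return name
    raise ValueError(f"run {run} outside known epochs")

print(epoch_of(80000))  # V6_2015_2016
```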

epuesche commented 4 years ago

Regarding the TLCFG values, they should be the ones from this config file: https://veritas.sao.arizona.edu/wiki/images/f/f3/CARE_V6_Std.txt

GernotMaier commented 4 years ago

Yes - this is correct (cross-checked it again with the CARE output at OSG)

GernotMaier commented 4 years ago

Mireia - does it need to be v484? We usually change version numbers only for code updates.

GernotMaier commented 4 years ago

Minor thing, but I would have chosen a different naming for the epochs.

For now, we had

(major epoch)_(season) (e.g. V6_2012_2013)

Would it break anything if we keep the season and simply add an alphanumerical suffix? E.g. V6_2014_2015b.

The choice of V6_2014_2014 does not indicate whether this is season 2014_2015 or 2013_2014 (and will break if we ever introduce more epochs).

mireianievas commented 4 years ago

Regarding the TLCFG values, they should be the ones from this config file: https://veritas.sao.arizona.edu/wiki/images/f/f3/CARE_V6_Std.txt

  • TLCFG 0 1 0 0 0.939 0.765 0.0
  • TLCFG 1 2 0 0 0.924 0.765 0.0
  • TLCFG 2 3 0 0 0.924 0.765 0.0
  • TLCFG 3 4 0 0 1.00 0.765 0.0

Hi @epuesche, is that 0.765 the Winston cone efficiency? That is quite a different factor from the almost 1.0 I was using, and something would definitely be wrong if I scaled the reflectivities by it to calculate T-factors. How is that factor used in the simulations? I am trying to decide whether we need to include it or not ... In principle we just have to use the reference reflectivity curves and the ones we measure with the WDR, no? No need to include the Winston cone efficiency.

@GernotMaier the naming can be whatever we choose; it does not have to stick to the 2014_2014 scheme. I will modify it.

mireianievas commented 4 years ago

@GernotMaier No need to change the code version, but since the IRFs are changing I thought Eventdisplay_AnalysisFiles should change its version, and for ED to pick it up the versions have to match. Am I wrong?

mireianievas commented 4 years ago

After dropping the small correction for the Winston cone efficiency I was applying (mainly to T3), the T- and S-factors are

#### T factors
season        mjdav    width  range_mjd      T[1]    err[1]    T[2]    err[2]    T[3]    err[3]    T[4]    err[4]
----------  -------  -------  -----------  ------  --------  ------  --------  ------  --------  ------  --------
2011-2012   55957      152    55812-56101   1         0       1         0       1         0       1         0
2012-2013a  56246      137    56101-56367   0.978     0.05    1.011     0.05    0.953     0.05    0.918     0.05
2012-2013b  56489.5    137.5  56367-56588   0.949     0.05    1.013     0.05    0.891     0.05    0.848     0.05
2013-2014a  56687.5     90.5  56588-56778   0.919     0.004   0.99      0.005   0.835     0.005   0.838     0.007
2013-2014b  56870      122    56778-56968   0.86      0.042   0.926     0.046   0.804     0.025   0.827     0.035
2014-2015   57067.5    136.5  56968-57242   0.772     0.023   0.858     0.005   0.762     0.006   0.779     0.011
2015-2016   57418      152    57242-57600   0.773     0.011   0.814     0.005   0.727     0.007   0.801     0.007
2016-2017   57783.5    151.5  57600-57966   0.717     0.007   0.815     0.008   0.679     0.006   0.744     0.007
2017-2018   58148.5    151.5  57966-58331   0.706     0.008   0.781     0.011   0.706     0.022   0.727     0.013
2018-2019   58513.5    151.5  58331-58696   0.669     0.01    0.721     0.012   0.655     0.004   0.674     0.012
2019-2020   58879      152    58696-59061   0.652     0.005   0.726     0.007   0.728     0.015   0.658     0.007

#### S factors
season        mjdav    width  range_mjd      S[1]    err[1]    S[2]    err[2]    S[3]    err[3]    S[4]    err[4]
----------  -------  -------  -----------  ------  --------  ------  --------  ------  --------  ------  --------
2011-2012   55957      152    55812-56101   0.991     0.048   1.001     0.044   0.998     0.033   1.023     0.039
2012-2013a  56246      137    56101-56367   0.943     0.107   1.051     0.061   0.945     0.068   0.923     0.063
2012-2013b  56489.5    137.5  56367-56588   0.892     0.068   1.033     0.053   0.816     0.06    0.843     0.067
2013-2014a  56687.5     90.5  56588-56778   0.862     0.019   0.979     0.025   0.764     0.03    0.791     0.027
2013-2014b  56870      122    56778-56968   0.766     0.041   0.879     0.049   0.701     0.036   0.786     0.042
2014-2015   57067.5    136.5  56968-57242   0.688     0.033   0.82      0.034   0.662     0.024   0.71      0.038
2015-2016   57418      152    57242-57600   0.686     0.023   0.764     0.017   0.644     0.027   0.749     0.024
2016-2017   57783.5    151.5  57600-57966   0.628     0.017   0.777     0.022   0.603     0.022   0.691     0.021
2017-2018   58148.5    151.5  57966-58331   0.656     0.02    0.748     0.028   0.639     0.033   0.68      0.021
2018-2019   58513.5    151.5  58331-58696   0.597     0.02    0.686     0.026   0.593     0.014   0.581     0.031
2019-2020   58879      152    58696-59061   0.603     0.009   0.721     0.01    0.694     0.02    0.629     0.055

image

image

The contents of the MSCW runparameter file will be

* T V6_2012_2013a 0.978 1.011 0.953 0.918
* T V6_2012_2013b 0.949 1.013 0.891 0.848
* T V6_2013_2014a 0.919 0.990 0.835 0.838
* T V6_2013_2014b 0.860 0.926 0.804 0.827
* T V6_2014_2015 0.772 0.858 0.762 0.779
* T V6_2015_2016 0.773 0.814 0.727 0.801
* T V6_2016_2017 0.717 0.815 0.679 0.744
* T V6_2017_2018 0.706 0.781 0.706 0.727
* T V6_2018_2019 0.669 0.721 0.655 0.674
* T V6_2019_2020 0.652 0.726 0.728 0.658

* G V6_2012_2013a 0.965 1.040 0.992 1.006
* G V6_2012_2013b 0.940 1.020 0.916 0.994
* G V6_2013_2014a 0.938 0.989 0.915 0.944
* G V6_2013_2014b 0.890 0.949 0.872 0.950
* G V6_2014_2015 0.891 0.957 0.869 0.911
* G V6_2015_2016 0.887 0.939 0.886 0.935
* G V6_2016_2017 0.877 0.954 0.889 0.929
* G V6_2017_2018 0.930 0.958 0.905 0.936
* G V6_2018_2019 0.893 0.951 0.905 0.862
* G V6_2019_2020 0.924 0.994 0.953 0.956

* s V6_2012_2013a 0.943 1.051 0.945 0.923
* s V6_2012_2013b 0.892 1.033 0.816 0.843
* s V6_2013_2014a 0.862 0.979 0.764 0.791
* s V6_2013_2014b 0.766 0.879 0.701 0.786
* s V6_2014_2015 0.688 0.820 0.662 0.710
* s V6_2015_2016 0.686 0.764 0.644 0.749
* s V6_2016_2017 0.628 0.777 0.603 0.691
* s V6_2017_2018 0.656 0.748 0.639 0.680
* s V6_2018_2019 0.597 0.686 0.593 0.581
* s V6_2019_2020 0.603 0.721 0.694 0.629
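These entries are simple whitespace-separated rows; a parser sketch (a hypothetical helper, not how ED itself reads the file):

```python
def parse_throughput(lines):
    """Parse `T|G|s <epoch> v1 v2 v3 v4` entries like the ones
    above into {kind: {epoch: [per-telescope factors]}}."""
    table = {}
    for line in lines:
        parts = line.replace("*", "").split()
        if len(parts) == 6 and parts[0] in ("T", "G", "s"):
            kind, epoch = parts[0], parts[1]
            table.setdefault(kind, {})[epoch] = [float(v) for v in parts[2:]]
    return table

tbl = parse_throughput(["* T V6_2014_2015 0.772 0.858 0.762 0.779"])
print(tbl["T"]["V6_2014_2015"][0])  # 0.772
```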

mireianievas commented 4 years ago

Merged the changes to v483 and deleted v484.

GernotMaier commented 4 years ago

Looks great, let's see what the effect is on the IRFs.

Are you going to start the production or should I?

mireianievas commented 4 years ago

I'm starting it.

epuesche commented 4 years ago

That's right, 0.765 is the Winston cone efficiency. If we are going off the measured WDR reflectivity curves, it shouldn't be necessary.

mireianievas commented 4 years ago

g-factors from photostat gains (available here: https://www.hep.physics.mcgill.ca/~veritas/photostat/ )

Surprise: they are only available from 2014.

image

mireianievas commented 4 years ago

I found a few prototype WDR measurements from D. Hanna in 2011-2013, taken with just one camera and Spectralon and measured one by one on the different telescopes. But the results he got (shown in light pink, normalized to the average of the reference reflectivity curves at 440-460 nm, the band he seems to have used) look a bit messy and are not in good agreement with the final WDR setup.

image

To be investigated.

mireianievas commented 4 years ago

Putting together the gains from the photostat page (https://www.hep.physics.mcgill.ca/~veritas/photostat/, which start in 2014) with a file of absolute gains that Tony compiled, probably obtained with the same scripts but covering autumn 2012 to 2019, I get:

image

I also updated the gain reference for V6 (the reference value goes from 5.54 to 5.73 dc/pe; the relative telescope factors remain 0.939, 0.924, 0.924, 1.000) following Elisa's file: https://veritas.sao.arizona.edu/wiki/images/f/f3/CARE_V6_Std.txt
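Under my assumption that the g-factor is just the measured absolute gain over the per-telescope reference (5.73 dc/pe times the relative telescope factor), the computation is:

```python
REF_GAIN_DC_PE = 5.73                    # V6 reference (updated from 5.54)
REL_TEL = [0.939, 0.924, 0.924, 1.000]   # relative telescope factors T1..T4

def g_factors(measured_gains):
    """Measured absolute gains (dc/pe, one per telescope) divided by
    the per-telescope reference gain."""
    return [g / (REF_GAIN_DC_PE * r) for g, r in zip(measured_gains, REL_TEL)]

# hypothetical photostat gains for T1..T4
print([round(g, 3) for g in g_factors([5.2, 5.3, 5.5, 5.6])])
```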

As expected, these g-factors are much closer to the ones Tony estimated, since they also use photostat gains.

What is strange is the behavior at the beginning of 2019/2020 in T2/T3: the gains are really low and then jump back to normal values in early 2020.

image

And these are the corresponding factors (very similar to the ones we were using so far, just with the finer binning in 2012/2013 and 2013/2014).

* T V6_2012_2013a 0.978 1.011 0.953 0.918
* T V6_2012_2013b 0.949 1.013 0.891 0.848
* T V6_2013_2014a 0.919 0.990 0.835 0.838
* T V6_2013_2014b 0.860 0.926 0.804 0.827
* T V6_2014_2015 0.772 0.858 0.762 0.779
* T V6_2015_2016 0.773 0.814 0.727 0.801
* T V6_2016_2017 0.717 0.815 0.679 0.744
* T V6_2017_2018 0.706 0.781 0.706 0.727
* T V6_2018_2019 0.669 0.721 0.655 0.674
* T V6_2019_2020 0.652 0.726 0.728 0.658

* G V6_2012_2013a 1.043 1.062 1.055 1.030
* G V6_2012_2013b 1.026 1.044 1.027 1.004
* G V6_2013_2014a 0.999 1.001 1.013 0.991
* G V6_2013_2014b 0.974 0.981 1.005 0.970
* G V6_2014_2015 0.955 0.990 0.990 0.949
* G V6_2015_2016 0.948 0.981 0.982 0.944
* G V6_2016_2017 0.942 0.983 0.986 0.957
* G V6_2017_2018 0.993 0.990 1.000 0.989
* G V6_2018_2019 0.940 0.972 0.978 0.975
* G V6_2019_2020 0.965 0.995 1.040 1.006

* s V6_2012_2013a 0.986 1.039 0.972 0.914
* s V6_2012_2013b 0.941 1.023 0.885 0.823
* s V6_2013_2014a 0.887 0.958 0.818 0.803
* s V6_2013_2014b 0.809 0.878 0.782 0.775
* s V6_2014_2015 0.713 0.821 0.730 0.715
* s V6_2015_2016 0.708 0.772 0.690 0.731
* s V6_2016_2017 0.653 0.774 0.647 0.688
* s V6_2017_2018 0.677 0.748 0.683 0.695
* s V6_2018_2019 0.608 0.678 0.619 0.635
* s V6_2019_2020 0.608 0.698 0.732 0.640

Now that I am almost done with the IRFs for the WDR + single-PE-gain values, I will do the MC/data comparison and generate some Crab spectra. But maybe these last values make more sense to implement, since the collaboration seems more inclined to use photostat gains: they don't depend on the Polya factors and the cadence is higher.

GernotMaier commented 4 years ago

Agree to use these values.

I can start in parallel a new production using them - we can then compare the results.

I've moved the old v483 results obtained with the values from

https://github.com/VERITAS-Observatory/Eventdisplay_AnalysisFiles/commit/9c71b61d92d29ea9fa9fd4f76e56c6770b363056#diff-31ce91ba0d06fa849bd1711f61239edb

to v483.2020317

mireianievas commented 4 years ago

Sounds good to me.

GernotMaier commented 4 years ago

Submitted the newest values as commit https://github.com/VERITAS-Observatory/Eventdisplay_AnalysisFiles/commit/f7877f27b08aedead1abc8accdf14c264ffd1f15

mireianievas commented 4 years ago

I uploaded the Python macro and the data to reproduce this, together with the plots, to the ED AnalysisFiles repository.

https://github.com/VERITAS-Observatory/Eventdisplay_AnalysisFiles/commit/5e78cfc553a013de4a702abf38235ce18c84bfee

GernotMaier commented 4 years ago

Thanks for adding this - lots of stuff. Could you add a README with some minimal text on how to run it, etc.?

Let's wait for the ATM62 results to judge if we are happy with all the values.

mireianievas commented 4 years ago

Perfect. Yes, working on the documentation is on the list.

GernotMaier commented 4 years ago

Are the throughput values in

https://github.com/VERITAS-Observatory/Eventdisplay_AnalysisFiles/tree/v483/Calibration/TelescopeThroughput

the most recent ones? Or did you add them to v484?