Closed jchavesmontero closed 9 months ago
@andreufont Let's focus on this PR now. Check out the new notebook when you have time, especially the last plot (see my comments above). I would also like feedback on the terminology used to refer to the Monte Python eBOSS mocks.
@schoeneberg , see the comment above. When we compare the errorbars from Chabanier et al. (2019) and from the "eBOSS mock" from the fiducial Nyx sim, the errorbars are quite different. Do you understand why? Maybe we are not reading the systematic contribution properly?
@jchavesmontero - to make the plot above I fixed the notebook to plot sqrt(variance), since it was plotting the variance rather than the errorbars.
Related to the naming, I think it is good to call these "eBOSS_mocks". I noticed that the filename includes _fid_, but we might want to remove this so that the same Python module can be reused for other eBOSS mocks with the same data format but generated from different Nyx boxes.
The rest looks good I think!
@schoeneberg we are plotting the diagonal elements of the covariance matrix obtained by inverting the inverse covariance matrix stored in pk_1d_DR12_13bins_invCov.out. This is what you suggested, but I also took a look at the columns with diagonal errors in pk_1d_Nyx_emu_fiducial_mock.out, and the discrepancy there is even worse.
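For reference, this is the procedure I am following, as a minimal sketch with a synthetic 3x3 matrix (not the real eBOSS data; the loader for pk_1d_DR12_13bins_invCov.out would replace the synthetic `cov_true` below): invert the stored inverse covariance back to the covariance, take its diagonal to get variances, and take the square root to get 1-sigma errorbars.

```python
import numpy as np

# Synthetic stand-in for the true covariance (hypothetical numbers,
# used only to build an inverse covariance like the one in the file).
cov_true = np.array([[4.0, 0.2, 0.0],
                     [0.2, 9.0, 0.1],
                     [0.0, 0.1, 1.0]])
inv_cov = np.linalg.inv(cov_true)   # what the file stores

cov = np.linalg.inv(inv_cov)        # invert back to the covariance
variance = np.diag(cov)             # diagonal elements = variances
errorbars = np.sqrt(variance)       # 1-sigma errorbars, not the variance
```

Note the final sqrt: plotting `variance` directly instead of `errorbars` was exactly the bug fixed in the notebook.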
It addresses #52. You can find the comparison between the mock and Chabanier2019 in p1d_measurements/plot_fid_eBOSS_mock.ipynb. I am a bit worried about the errors, though. Mock and observational errors agree at high z but differ at low z (see the last plot). Do you expect this trend, @andreufont?