star-bnl / star-sw

Core software for STAR experiment

Change TPX gain loading interface from fTpx->put(... to fTpx->gain_al… #417

Closed fisyak closed 1 year ago

fisyak commented 1 year ago

…go(..., Clean ups, add prototypes for (i)TPC23

starsdong commented 1 year ago

I attach Yuri's comments regarding this PR here

The two problems which I mentioned during the meeting are summarized in my post https://drupal.star.bnl.gov/STAR/system/files/FCF_online_offline_Update.pdf. Two modifications are required to solve them:

  1. The first I have posted as PR#417.
  2. The second modification requires an action from Tonko or Jeff.
starsdong commented 1 year ago

Some test results from Xianglei:

Hi Yuri,

I have produced a new offline clustering sample with the modified TpcHitMaker and RTS for Run19 AuAu19.6. The data is here: /star/u/starembd/zhux/embd/prod.auau19/output.tpxclu.newrts. The good news is that the offline clusters now match the online clusters perfectly in the track nHits distribution; see the attached comparison plots. From my point of view this cluster finder issue is resolved, although it needs your confirmation from TbyT comparisons. Thanks a lot to you and Tommy for the efforts! Could you propagate the update to SL21c? I will then move forward to a Run19 AuAu19 embedding test production with the new library.

Also, thank you for the fix for SL19b. I will try it sometime next week and let you know the results for Run18.

starsdong commented 1 year ago

Hi Yuri, thank you very much for helping resolve this issue, and thanks to Xianglei for the encouraging test results. I have added a few experts to the reviewer list to help comment on and merge this PR asap. I have a few questions about the PR and about the integration plan:

1. Yuri, in the PR there is one new file, StTpcHitMaker.flow, which seems to be a log file. Do we really need this file?
2. The function from_file(...) uses gains from a local file. Was this added for local test purposes? In production, is this used, or are gains always loaded from the DB?
3. My understanding is that this TpcHitMaker, as the offline cluster finder, was not used in data production but is only needed for embedding production. Correct? Regarding the patch to the previous libs, shall we propagate this to the previous libs and label them SLXX_embed?
4. Yuri, you mentioned the changes needed in RTS. This seems to be the backward compatibility issue. Do we need them in the embed libs for previous datasets?

starsdong commented 1 year ago

Hi Yuri, as far as I can see, the changes to StTpcHitMaker in this PR don't seem to be relevant to the offline/online cluster finder change. If so, I would like to request that you remove the StTpcHitMaker changes from this PR, and let us move this PR forward. Thanks

starsdong commented 1 year ago

Two points to clarify before we merge:

1. The relevant piece of code in StTpcHitMaker was already not effective in the current main branch. The original commits Gene introduced were for speed-up (see Gene's comment above). This PR makes no effective change to production except for removing the commented-out code. One may later reconsider the relevant piece of code in StTpcHitMaker, but that will be dealt with separately.
2. I hope the embedding team and Xianglei can provide the standard MC/data comparison figures through the standard embedding chain to demonstrate the effectiveness of this change and conclude on the issue in the offline/online cluster finder. Then we can safely merge this PR into main, and also into the other relevant old libraries.

zhux97 commented 1 year ago

Hi all, I have finished the data/MC comparison with the StTpcHitMaker in this PR, along with the RTS in PR#429.

• Run19 AuAu 19.6 GeV, proton embedding (SL21c default): https://www.star.bnl.gov/protected/embedding/pr417test/auau19_proton_nhits_default.pdf
• Run19 AuAu 19.6 GeV, proton embedding (SL21c+PR#417+PR#419): https://www.star.bnl.gov/protected/embedding/pr417test/auau19_proton_nhits_pr417rts.pdf
• Run18 AuAu 27 GeV, pion embedding (SL19b default): https://www.star.bnl.gov/protected/embedding/pr417test/auau27_pion_nhits_default.pdf
• Run18 AuAu 27 GeV, pion embedding (SL19b+PR#417+PR#419): https://www.star.bnl.gov/protected/embedding/pr417test/auau27_pion_nhits_pr417rts.pdf

In all the above plots, red is MC, blue is production real data, and black is the real data in the embedding chain. You can see the significant improvement (at low pT) for both the MC and real-data tracks in the embedding chain with PR#417 and #419, though the MC still needs a bit more tuning for a better match with real data.

starsdong commented 1 year ago

Hi Xianglei, thanks for the great news! The comparison looks very promising. I am wondering about your comment in the last sentence. What I can see is that overall the MC/data distributions match quite well, particularly with the significant improvement at low pT, as you pointed out. There may be a slight shift between MC and data in the nHits peak for high-pT tracks. I have two comments/questions:

1. How were the proton tracks in the real data selected? Will mis-PID limit the comparisons, particularly when moving to higher pT?
2. Even with such a slight mismatch, the efficiency uncertainty could still be under good control, since the nHits cut in our analyses is often not strictly high. The distribution has a small tail at nHits less than 37 (for the iTPC). For tracks at higher eta one may need to pay attention (so for BES-II, I think we should consider including comparisons for tracks with eta up to 1.5 or even 2.0). But for tracks going through the barrel, I see we should have reasonable control of the efficiency.

So I would advocate that we proceed with the current setup and move on with the embedding production. Certainly, I would like to ask Rongrong to poll the opinions of the PWGCs and give the final green light to move on.

zhux97 commented 1 year ago

Hi Xin, the remaining slight mismatch is not related to this pull request; it is related to the fine tuning of the TpcRS parameters, which we practiced a lot at the beginning of this year. After updating the cluster finder, we need to revisit this a bit. Yes, this effect should be small. But for the final BES-II embedding production, we need at least an SL21c_embed with the updates in this PR and in PR419.

marrbnl commented 1 year ago

Do you plan to retune TpcRS to improve the small mismatch? If so, what would be the timescale? Thanks.

zhux97 commented 1 year ago

Yes, it will take 1-2 weeks, since several iterations of test production are needed.

starsdong commented 1 year ago

Hi Xianglei,

Certainly, I totally agree that we should merge this PR in and prepare for the BES-II production. Could you please approve (if you agree) this PR asap so we can proceed?

Regarding the further tuning, I understand this has been done regularly before. I re-looked at the comparison figures; indeed, some slight tuning may further improve the MC/data agreement, not only at high pT. I have a couple of comments:

1. We need more statistics than in your test production to better justify the agreement, especially for the fine-tuning.
2. We should keep in mind that the sample selected in real data may have limitations due to PID.
3. Could you please also include the DCA comparisons, as done in the standard QA? And I think we may also look at the pion samples for completeness.

If this needs a couple more weeks, I would recommend we wait for your final tuning results, which are expected to yield better MC/data agreement. Could you and the embedding team focus on the Run19 dataset tuning first? Then we can start that production soon, and later we can work on the tuning for the other datasets.

Thanks /xin

zhux97 commented 1 year ago

Hi Xin,

On Nov 8, 2022, at 00:58, Xin Dong wrote:

> Hi Xianglei,
>
> Certainly, I totally agree that we should merge this PR in and prepare for the BES-II production. Could you please approve (if you agree) this PR asap so we can proceed?

Sure, but I have not done a code review before. To be honest, I cannot review all the changes in the code due to my lack of expertise. But I will approve it shortly, as a user of the code (I hope this is also reasonable).

> Regarding the further tuning, I understand this has been regularly done before. I re-looked at the comparison figures, indeed, some slightly tuning may further improve the MC/data agreement, not only just high pT. I just have a couple of comments
>
> • We need more statistics compared to your test production to better justify the agreement, especially regarding the fine-tuning.
> • We should keep in mind that the sample selected in real data may have limitation due to PID.
> • Could you please also include the DCA comparisons as done in the standard QA? And I think we may also look at the pion samples for completeness.
>
> If this needs a couple of more weeks, I would recommend we wait for your final tuning results which is expected to yield better MC/data agreement. Could you and the embedding team focus on maybe the Run19 dataset tuning first? So we can start the production soon and later on we can work on tuning for other datasets.

Please find more data/MC comparisons with a large embedding test sample: https://www.star.bnl.gov/protected/embedding/AutoIndex.php?dir=pr417test/forXin/ For PID, please look at realmass2.pdf. For DCA, see rcmcngdca.pdf, keeping in mind that there is contamination in the real data (due to Lambda decays).

I am working on 19.6 GeV right now. For a quick tuning, I need to focus on this. Thanks for your patience!

Xianglei
