JeffersonLab / hcana

Hall C++ Analyzer

Podd 1.7 rc3 #469

Closed hansenjo closed 3 years ago

hansenjo commented 3 years ago

Update to Podd 1.7.0-rc3

MarkKJones commented 3 years ago

I see a message like Big ADC Trigger Time Shift, ROC 2 14 1309949109

which I had not seen before. I will have to look into this.

hansenjo commented 3 years ago

That could very well be related to the switch from Int_t to UInt_t in the decoder API. Most of the raw data info really is unsigned. The trouble happens when people take differences of these. The current code is rather sloppy in that regard.

I'd be happy to run it through the debugger if I could have your replay setup as a test case.

hansenjo commented 3 years ago

As a quick test, try this: Go to line 221 of THcHitList.cxx and change Int_t titime to UInt_t titime. See if the message about the big time shift goes away and let me know.

MarkKJones commented 3 years ago

The problem ends up being that for some events the code does not get a trigger time from the TI. The code did not check whether the TI had trigger time data by calling GetNumHits before calling GetData, so when there were no hits, GetData returned the maximum integer value. I am checking into the root cause of the TI not having data.

MarkKJones commented 3 years ago

Uncovered that in Decoder.h MAXBANK = 4095, when it should be 65535 since the bank ID is 16 bits. Hall C inserted the FADC scaler data into the readout of the ROC with the FADCs as bank ID = 9250 every 2 seconds. In CodaDecoder::bank_decode, if the bank ID > MAXBANK, it silently sets an error flag and does not analyze any data in that ROC. This meant that Hall C was not analyzing the ROCs with FADCs for those events, which effectively means the data was not analyzed. We should change MAXBANK to 65535 and maybe write a message to "cerr". I have modified THcHitList.cxx so that it checks whether the TI has data before doing a GetData call and prints an error message. I pushed these changes to hansenjo/hcana podd-1.7-rc3. I assume that Ole can update Decoder.h in Podd.

hansenjo commented 3 years ago

Hi Mark,

thanks for the debugging. It looks like my simple switch to UInt_t uncovered a hidden bug. I've been saying for a long time that silently returning "safe" values, like 0, in case of some out-of-range condition or other unexpected situation is a mistake since it can lead to analysis errors. Here we go. Fortunately, I think I've patched most of those shortcuts by now.

Yes, I can raise MAXBANK to 64k, but it's time to think of a more efficient solution for the bankdat array because obviously almost all of that space is going to be wasted. It's easy to do and it's been on my list already, so I'll do it shortly.

One question: What are the units for the timestamps?

Also, which run are you analyzing? I'd like to set up a reference Hall C replay. If you could point me to the replay script and database you're using, that would help a great deal.

Ole

MarkKJones commented 3 years ago

The time stamp unit is 4 ns. Steve Wood mentioned using a hash table for bankdat; I agree that is a good idea. I will send you info about the Hall C replay directory so we can have a reference.

hansenjo commented 3 years ago

Yes, I agree, I think a hash table will be part of the more efficient solution for the bank data array.

Update: It turns out that a typical setup only has a few dozen banks. Using a hash table in such a case is not efficient. A simple std::vector is usually fastest, which is the solution I implemented.

hansenjo commented 3 years ago

I've updated this pull request to advance Podd to version 1.7.0-rc4, which includes a fix for the maximum bank number (MAXBANK=65535).

@MarkKJones: could you please test this with your hcana replay and let me know if it works?

hansenjo commented 3 years ago

This PR is now basically obsoleted by #470, unless you prefer to stay at Podd 1.7.0-rc4 for the time being.