tomeichlersmith opened 2 years ago
Using Erik's correlation script as a starting point.
The main issue I'm encountering is determining which spill a TS event is a part of. The correlation script pulls the WR and TS timestamps into memory and then performs an alignment: the first O(10) WR events are skipped, and then the TS timestamps are compared to the WR timestamps and assigned to the spill with the highest number of matches. Spills in TS are separated based on a deprecated flag written to the raw-data txt file.
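A rough sketch of that assignment step as I understand it; the function name, skip count, and match tolerance below are placeholders of mine, not values from the correlation script:

```python
# Rough sketch of the spill assignment as I understand it from the
# correlation script; the names, the skip count, and the match tolerance
# are placeholders, not Erik's actual values.
def assign_spill(ts_timestamps, wr_spills, tolerance=50):
    """Return the index of the WR spill whose timestamps have the most
    matches (within tolerance) to the given TS timestamps."""
    def n_matches(wr_times):
        return sum(
            any(abs(wr - ts) <= tolerance for wr in wr_times)
            for ts in ts_timestamps
        )
    return max(range(len(wr_spills)), key=lambda i: n_matches(wr_spills[i]))

# The first O(10) WR events are skipped before any splitting,
# i.e. something like: wr_timestamps = wr_timestamps[10:]
```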
I've tried two methods for separating the TS timestamps into spills.
For the WR, I simply check whether the "channel" in the event packet is the "Start of Spill" channel; if it is, I start a new spill. In the correlation script, Erik additionally requires that consecutive starts of spill be at least 5 s apart, to guard against the occasional case where two spill signals are sent at once.
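In rough form, assuming events arrive as (channel, timestamp-in-seconds) pairs; the channel ID and event layout here are placeholders, not the real WR packet format:

```python
# Rough sketch of the WR spill splitting described above; the channel ID
# and the (channel, timestamp) event layout are placeholders.
START_OF_SPILL_CHANNEL = 0  # placeholder channel ID

def split_wr_spills(events, min_gap_s=5.0):
    """Group WR events into spills, opening a new spill on a Start-of-Spill
    signal only if it comes at least min_gap_s after the previous one."""
    spills, last_start = [], None
    for channel, timestamp in events:
        if channel == START_OF_SPILL_CHANNEL:
            # Erik's 5 s guard against doubled spill signals
            if last_start is None or timestamp - last_start >= min_gap_s:
                spills.append([])
                last_start = timestamp
        elif spills:
            spills[-1].append(timestamp)
    return spills
```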
Testing these methods on the raw data from Run 203, I get widely varying spill counts.
| Subsystem | Method | Events | Spills |
|---|---|---|---|
| WR | | 152778 | 62 |
| TS | 1 | 28545 | 594 |
| TS | 2 | 28545 | 348 |
The elog mentions that Run 203 was used for testing the monitoring and calibration, so my next step is to try another run: Run 205.
| Subsystem | Method | Events | Spills |
|---|---|---|---|
| WR | | 702789 | 270 |
| TS | 1 | 131689 | 2705 |
| TS | 2 | 131689 | 1688 |
I use the decode_2fibers_to_RAW_fromBin.py script to convert the eudaq data file into a format that is easier to work with.
What's puzzling to me is that the WR has more events and fewer spills. If it had more of both, that could easily be handled by skipping spills that only appear in the WR.
I think method one is the right choice. From the elog, it looks like the Cherenkovs were used in the trigger for TS, so you should expect fewer events for TS. I can't say why the WR seems to be seeing ~1/10 of the spills, though. The rate of events for TS (using method 1) is only ~50 events per spill, which seems too low to me. Where is the code that extracts the timestamp from the data? If it is truncating the most significant bits of the TS timestamp, you would see inflated spill counts.
I shift four bytes from the event into a 32-bit timestamp:
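Schematically, the packing looks like this; b0 through b3 stand for the four raw bytes in the order I read them out of the event packet (the exact byte offsets are omitted here):

```python
# Schematic of the byte packing; b0 through b3 are the four raw bytes in
# the order read out of the event packet (exact offsets omitted).
def timestamp_from_bytes(b0, b1, b2, b3):
    """Pack four bytes into a 32-bit timestamp with b0 most significant."""
    return (b0 << 24) | (b1 << 16) | (b2 << 8) | b3

# the "opposite order" mentioned below would be
#   (b3 << 24) | (b2 << 16) | (b1 << 8) | b0
```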
I tried the opposite byte order, but that produced pseudo-random timestamps, so I went with this method.
Would it help to use the old format with the UTC timestamp still in the header, to see if your method of identifying when to increment the spill count makes sense? I think O(1 s) precision would be enough to tell you that, right?
Yes, the spills were at minimum 30 s apart, so that would be wonderful.
Edit: Do you have a run number I can look at? I have a copy of everything that was in /u1/ldmx/data/, and I can find other stuff at SLAC.
OK, so what do you need, a .txt file or a .root file? I had just implemented putting the UTC timestamp into the ldmx-sw format when it was taken out, so I might need to rerun. Otherwise you can get any older .txt file from
/nfs/slac/g/ldmx/TS-data/test_stand_data/raw/
from, say, sometime during April 2-6.
I can make do with a text file easily enough :) thank you
Use the reformat infrastructure to develop alignment for these "fast" subsystems.