DUNE-DAQ / trigger

Trigger infrastructure of the DUNE DAQ

Concurrent Trigger Algorithms #202

Closed: CharlieBatchelor closed this issue 11 months ago

CharlieBatchelor commented 1 year ago

Overview & Goal

So far, we've managed to demonstrate self-triggering on physics events, with Trigger Primitives processed by an 'Activity' finder, one of the TriggerActivityMakers. A small repository of these has been built up, including (of course) the HorizontalMuon maker, which searches for specified adjacency, multiplicity or adc_threshold values within single 2D 'views'. There is also the PlaneCoincidence algorithm, which keeps a window/view for each detector plane and lets us combine the above trigger types in a 'mix and match' way to look for coincident activity across all three planes. We also have the DBSCAN, ADCSimpleWindow and Supernova activity finders, and so on.

It is currently only possible to run one of these activity finders at a time, during a DAQ run.

In the future, and hopefully for the upcoming PDHD2, we really want to be able to run any subset of our algorithms in parallel. That is, we'd like to search for tracks, Michel electrons and supernovae concurrently. That's the goal of this issue.

One Possible Approach

If we inspect the current trigger system diagram for an example 2-link system (https://github.com/DUNE-DAQ/trigger/files/11198897/TriggerSystem2Links.pdf), we can see how the above might be achieved. By inserting a 'k-way' Tee-type module between the TPZipper and the TAMaker(s) to copy TPSets to multiple TAMakers, one could simply replicate the trigger flow from the TAMaker down to the MLT (a rough code sketch of this replication follows the task list below). So, for each additional algorithm we want to run concurrently, we would have an additional:

- TAMaker
- TASetTee
- TAZipper
- TCMaker
- TCSetTee

whilst maintaining only one set of the following, per APA (or CRP):

- TABuffer
- TCBuffer
- MLT

Task List

- Sketch this first approach out, using the TP replay app for testing. This will likely be the bulk of the work required here, since the replay app calls the actual get_trigger_app() in daqconf, which handles all the configuration generation and application connections. If I understand the system correctly so far, the above approach should work, and perhaps the best way to start is by taking advantage of the existing Tee module to test the 2-algorithm case.
- Confirm with the TP replay app that the desired behaviour is observed: multiple TAMakers running, multiple TCMakers running, the MLT accepting and sending TDs of both trigger types, and those types stored in the output swtest*.hdf5 files.
- Finally, extend to a full DAQ system, testing on some frames.bin-type file that contains enough data to trigger on!
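As a rough illustration of the per-algorithm replication (this is not the actual daqconf code; the helper names, plugin strings and connection layout here are all hypothetical), the configuration generation could loop over the requested algorithms along these lines:

```python
# Hypothetical sketch of per-algorithm module replication; make-believe names,
# not the real daqconf API.

def build_trigger_chains(algorithms):
    """For each requested algorithm, replicate the TAMaker -> ... -> TCSetTee
    chain, while keeping a single TABuffer/TCBuffer/MLT per APA (or CRP)."""
    modules, connections = [], []

    # One copy of the per-algorithm chain for every algorithm run concurrently.
    for i, (ta_plugin, tc_plugin) in enumerate(algorithms):
        modules += [
            (f"tamaker_{i}", ta_plugin),
            (f"tasettee_{i}", "TASetTee"),
            (f"tazipper_{i}", "TAZipper"),
            (f"tcmaker_{i}", tc_plugin),
            (f"tcsettee_{i}", "TCSetTee"),
        ]
        # A 'k-way' Tee upstream would copy TPSets into the head of each chain.
        connections.append(("tpset_tee", f"tamaker_{i}"))
        # Each chain ultimately feeds the single MLT.
        connections.append((f"tcsettee_{i}", "mlt"))

    # Only one set of these, shared by all chains.
    modules += [("tabuffer", "TABuffer"),
                ("tcbuffer", "TCBuffer"),
                ("mlt", "ModuleLevelTrigger")]
    return modules, connections


if __name__ == "__main__":
    mods, conns = build_trigger_chains([
        ("TriggerActivityMakerHorizontalMuonPlugin",
         "TriggerCandidateMakerHorizontalMuonPlugin"),
        ("TriggerActivityMakerPlaneCoincidencePlugin",
         "TriggerCandidateMakerPlaneCoincidencePlugin"),
    ])
    print(len(mods), "modules,", len(conns), "connections")
```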

CharlieBatchelor commented 1 year ago

Okay, so a first go at this seems to be working to some degree. Using the replay app, I can run two algorithms concurrently. I've made branches of trigger, daqconf and triggeralgs called cbatchelor/concurrent_algorithms. Generating a TP replay app configuration and pointing it at a TP dataset with the command below, I can issue TAs of multiple types within a run:

```bash
python -m trigger.replay_tp_to_chain -s 10 \
  --trigger-activity-plugin "TriggerActivityMakerPlaneCoincidencePlugin" \
  --trigger-activity-config 'dict(adjacency_threshold=100, window_length=10000, adj_tolerance=20, adc_threshold=13000)' \
  --trigger-activity-plugin "TriggerActivityMakerHorizontalMuonPlugin" \
  --trigger-activity-config 'dict(adjacency_threshold=120, window_length=10000, adj_tolerance=20, adc_threshold=1000)' \
  --trigger-candidate-plugin TriggerCandidateMakerPlaneCoincidencePlugin \
  --trigger-candidate-config 'dict(prescale=1)' \
  --trigger-candidate-plugin TriggerCandidateMakerHorizontalMuonPlugin \
  --trigger-candidate-config 'dict(prescale=1)' \
  -l 1 --input-file $TPDATA/run_020472_tps_2seconds.txt json
```

The post-run logs show multiple TA types being issued, and the corresponding TC types. Both TAMakers receive the same number of inputs (6807), as expected, and generate different numbers of outputs (904 and 939), which is perhaps also expected. There is some discrepancy in the objects received by the TCMakers, which report "received 941 inputs and successfully sent 22 outputs" and "received 887 inputs and successfully sent 15 outputs" respectively. Finally, an hdf5_dump on the output hdf5 file shows 35 stored TRs, in contrast to the 22 + 15 = 37 outputs sent by the TCMakers. Could this be the MLT merging logic at work? Two merges of overlapping TCs into single TDs would account for 37 TCs becoming 35 TRs.

CharlieBatchelor commented 1 year ago

On a different run, the MLT provides a log confirming the merging of TCs into TDs:

```
2023-Apr-11 08:06:06,185 LOG [void dunedaq::trigger::ModuleLevelTrigger::send_trigger_decisions() at /dune/app/users/cbatchel/dunedaq_builds/concurrent_tamakers/sourcecode/trigger/plugins/ModuleLevelTrigger.cpp:362] Run 1: Received 37 TCs. Sent 35 TDs consisting of 36 TCs. 1 TDs (1 TCs) were created during pause, and 0 TDs (0 TCs) were inhibited. 0 TDs (0 TCs) were dropped. 0 TDs (0 TCs) were cleared.
```

jrklein commented 1 year ago

Charlie, thanks! This is something that I have been worrying about for a while, and I appreciate you taking the initiative and looking at it now! Would you be willing and able to discuss this on tomorrow’s DS/PP call? I realize this is probably your last one for a while…let me know if tomorrow is possible. I think there is a lot to discuss on this topic.

    Thanks,
       Josh

CharlieBatchelor commented 1 year ago

The replay app testing looks good, with the MLT merging logic working as expected. As it stands, the current implementation is laid out as in this graph: replay_app_1_link_2_algorithms.pdf

CharlieBatchelor commented 1 year ago

The last task here requires using a daqconf.json file to generate the configuration, rather than passing command-line flags to the TP replay app. I think the only new thing here should be working out how to pass lists of algorithms, rather than a single string, via the trigger configuration block in the daqconf.json dictionary. The altered get_trigger_app() in my branch should handle such a list in the expected way, though at present only for two concurrent algorithms.
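For illustration only (the key names below are assumptions, not necessarily what the trigger block in daqconf.json actually uses), the idea is that the trigger block would carry parallel lists rather than single strings, which get_trigger_app() could then zip over to build one chain per entry:

```python
# Hypothetical example of a daqconf.json-style trigger block carrying lists of
# algorithms instead of single strings; the exact key names are assumptions.
trigger_block = {
    "trigger_activity_plugin": [
        "TriggerActivityMakerHorizontalMuonPlugin",
        "TriggerActivityMakerPlaneCoincidencePlugin",
    ],
    "trigger_activity_config": [
        {"adjacency_threshold": 120, "window_length": 10000,
         "adj_tolerance": 20, "adc_threshold": 1000},
        {"adjacency_threshold": 100, "window_length": 10000,
         "adj_tolerance": 20, "adc_threshold": 13000},
    ],
    "trigger_candidate_plugin": [
        "TriggerCandidateMakerHorizontalMuonPlugin",
        "TriggerCandidateMakerPlaneCoincidencePlugin",
    ],
    "trigger_candidate_config": [{"prescale": 1}, {"prescale": 1}],
}

# A tidied get_trigger_app() could zip the lists and build one chain per entry:
for ta_plugin, ta_cfg, tc_plugin, tc_cfg in zip(
    trigger_block["trigger_activity_plugin"],
    trigger_block["trigger_activity_config"],
    trigger_block["trigger_candidate_plugin"],
    trigger_block["trigger_candidate_config"],
):
    print(f"chain: {ta_plugin} ({ta_cfg}) -> {tc_plugin} ({tc_cfg})")
```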

Future Implementation - 'k-way' Distributor? Maybe not...

If we do decide to do this type of TPSet copying and distribution to the various TAMakers we want to run, a new 'k-way' Distributor module would need to be constructed, one which makes k copies of each incoming TPSet and sends one to each of the TAMakers, much in the spirit of the existing Tee module. However, I'm not sure that's the right way to go: we should probably avoid k-way copying of the high-frequency streams of TPSets, whereas copying TAs and TCs really shouldn't cause us a problem downstream.
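Purely as a conceptual illustration of the fan-out (this is plain Python, not the C++ DAQ module that would actually have to be written), a k-way distributor would do something like:

```python
# Conceptual illustration of a 'k-way' distributor: copy every incoming item to
# each of k output queues, in the spirit of the existing Tee module. This is a
# plain-Python sketch, not the real DAQ module implementation.
import copy
import queue


class KWayDistributor:
    def __init__(self, k):
        self.outputs = [queue.Queue() for _ in range(k)]

    def push(self, tpset):
        # One copy per downstream TAMaker; this copying cost is exactly why
        # doing it for high-frequency TPSet streams may not be the right approach.
        for out in self.outputs:
            out.put(copy.deepcopy(tpset))


dist = KWayDistributor(k=3)
dist.push({"start_time": 0, "tps": [1, 2, 3]})
print([q.get() for q in dist.outputs])
```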

get_trigger_app() Tidy Up

The configuration generation for trigger is quite messy, and should be tidied up. It essentially does two things:

  1. For a given tp_config item passed to the function, it generates a list of Modules. For example, it does things like "For each link, create a TPChannelFilter module, TPBuffer module..." and so on.
  2. It then connects up all the modules in the list according to what we expect/want the system to run like.

Each step is itself done in quite a convoluted order, and it would be more readable (at least IMO) to generate the configuration in a "top to bottom" sense, based on the trigger flow graph diagrams: start with the channel filters, then the TPSetTees, then the heartbeats and buffers, and so on.
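A rough sketch of that "top to bottom" ordering (none of these helper functions exist in daqconf; they only illustrate the intended reading order of a tidied get_trigger_app()):

```python
# Purely illustrative sketch of "top to bottom" configuration generation,
# following the trigger flow graph rather than the current convoluted order.

def make_modules_for_link(link):
    """Per-link modules, listed in the order data flows through them."""
    return [
        f"tpchannelfilter_{link}",  # 1. channel filters
        f"tpsettee_{link}",         # 2. TPSet tees
        f"heartbeatmaker_{link}",   # 3. heartbeats
        f"tpbuffer_{link}",         # 4. buffers
    ]


def make_per_algorithm_modules(algorithm):
    """Per-algorithm modules, again top to bottom."""
    return [f"tamaker_{algorithm}", f"tazipper_{algorithm}", f"tcmaker_{algorithm}"]


def get_trigger_app_sketch(links, algorithms):
    modules = []
    for link in links:                 # upstream, per-link stage
        modules += make_modules_for_link(link)
    for algorithm in algorithms:       # per-algorithm stage
        modules += make_per_algorithm_modules(algorithm)
    modules += ["tabuffer", "tcbuffer", "mlt"]  # single downstream stage
    return modules


print(get_trigger_app_sketch(links=[0, 1], algorithms=["hm", "pc"]))
```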

MRiganSUSX commented 11 months ago

Done as part of daqconf PR394