(Last updated Nov 2024, MiR, AO & AS)
The `trigger` package contains the modules that make up the DUNE FD DAQ trigger system. Implementations of the physics algorithms that produce data selection objects (trigger primitives, trigger activities and trigger candidates) live in the `tpglibs` and `triggeralgs` packages. The configuration schema and the code that generates the trigger application live in `appmodel`.
The main goal of the trigger is to extract information from the data to form Trigger Decisions, which request that the raw data be saved. Additionally, in conjunction with the DataFilter (`datafilter`), the aim is to reduce the total data volume from the four planned DUNE modules in line with the DUNE requirements on trigger acceptance and efficiency for various physics signals.
- Self-triggering: identify above-threshold electronic signals on a channel that indicate interesting physics activity and store them as Trigger Primitives (TPG algorithms); identify clusters of hits in a module (Trigger Activity); and merge clusters across modules (Trigger Candidate).
- Handle multiple trigger sources, including TPC, PDS, calibration sources, beam information, and "external" information such as SNB notifications from adjacent DUNE modules or other experiments. Merging of readout windows for multiple coincident triggers, and explicit coincidence requirements (if desired), must be possible.
- Provide triggers such as random or pulsed triggers, and support prescaling of all triggers (e.g. for low-threshold or high-rate triggers); see the sketch after this list.
- Have a latency that is small compared to the residence time of data in the Readout buffers.
- Allow offline measurement of trigger efficiency and reconstruction of the decision path that went into generating a Trigger Record.
- Provide the ability to measure any trigger-related dead time, and provide operational monitoring information about the rates of different trigger types.
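Prescaling here means accepting only every N-th trigger of a given type, so that a low-threshold or high-rate trigger can run continuously without flooding the output. A minimal sketch, with a hypothetical `Prescaler` class (the real `trigger` package configures prescales through its configuration schema, not this API):

```cpp
#include <cstdint>

// Illustrative prescaler: passes 1 out of every `prescale` candidates.
// Names are hypothetical, for illustration only.
class Prescaler {
public:
  explicit Prescaler(uint64_t prescale) : m_prescale(prescale) {}

  // Returns true if this candidate should be promoted to a decision.
  bool accept() { return (m_seen++ % m_prescale) == 0; }

private:
  uint64_t m_prescale;
  uint64_t m_seen = 0;
};
```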
Trigger's position relative to DAQ:
As can be seen, the trigger lies at the heart of the DUNE DAQ: it receives multiple inputs and creates and forwards Trigger Decisions, while also responding to data requests.
The trigger is designed hierarchically. The minimal (extremely simplified) flow:
The hits-on-channel, in the form of Trigger Primitives, are merged at the scale of one TDAQ unit (such as an APA or CRP) into a cluster representing some type of Trigger Activity. Multiple Trigger Activity outputs can then be merged into a single Trigger Candidate that may include information spanning multiple TDAQ units or multiple slices of time (say, many APAs or many seconds) for a single system. Finally, multiple Trigger Candidates can be merged across systems (e.g. TPC and PDS) in the MLT (ModuleLevelTrigger) into a Trigger Decision: a request to save the data.
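A schematic of this TP → TA → TC → TD chain in code, with all types and maker functions reduced to hypothetical stand-ins (the real data structures live in `trgdataformats` and the real maker interfaces in `triggeralgs`):

```cpp
#include <vector>

// Hypothetical stand-ins for the real trgdataformats types.
struct TriggerPrimitive  { /* channel, time, ADC sum, ... */ };
struct TriggerActivity   { std::vector<TriggerPrimitive> inputs; };
struct TriggerCandidate  { std::vector<TriggerActivity>  inputs; };
struct TriggerDecision   { std::vector<TriggerCandidate> inputs; };

// One TDAQ unit (APA/CRP): cluster TPs into a Trigger Activity.
TriggerActivity make_activity(const std::vector<TriggerPrimitive>& tps) {
  return TriggerActivity{tps};
}

// One system: merge TAs, possibly spanning units or time slices, into a TC.
TriggerCandidate make_candidate(const std::vector<TriggerActivity>& tas) {
  return TriggerCandidate{tas};
}

// MLT: merge TCs across systems (TPC, PDS, ...) into a Trigger Decision,
// i.e. a request to save the corresponding raw data.
TriggerDecision make_decision(const std::vector<TriggerCandidate>& tcs) {
  return TriggerDecision{tcs};
}
```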
An up-to-date representation of the trigger system does not currently exist, but the diagram below dates from the beginning of the v5 development:
The diagram above shows the DAQModules and connections in an example trigger-and-readout app. Blue rounded rectangles are the `TriggerDataHandler` DAQModules, red rectangles are the external trigger inputs, the orange rectangle represents the readout application, and the purple one represents the `ModuleLevelTrigger`, which handles the trigger candidates. Each DataHandler module (here called `ReadoutModel`) receives one type of data (e.g. TriggerActivity), runs pre-processing tasks on it, inserts it into a latency buffer (handled by the `LatencyBufferHandler`), and runs post-processing tasks that generate a new object (e.g. TriggerCandidate).
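In rough pseudo-C++ terms, each of these handlers implements the following pipeline (all names in this sketch are illustrative stand-ins, not the actual `TriggerDataHandler` API):

```cpp
#include <deque>
#include <vector>

// Sketch of one DataHandler stage, templated on its input and output types
// (e.g. In = TriggerActivity, Out = TriggerCandidate). Illustrative only.
template <typename In, typename Out>
class DataHandlerSketch {
public:
  void process(const In& input) {
    preprocess(input);                  // e.g. monitoring, filtering
    m_latency_buffer.push_back(input);  // kept around to answer data requests
    postprocess(input);                 // may emit zero or more new objects
  }

  // Data requests are served out of the latency buffer.
  const std::deque<In>& buffer() const { return m_latency_buffer; }

private:
  void preprocess(const In&) {}
  void postprocess(const In& in) {
    // Run the configured algorithm; forward any outputs downstream.
    for (const Out& out : run_algorithm(in)) { send(out); }
  }
  std::vector<Out> run_algorithm(const In&) { return {}; }
  void send(const Out&) {}

  std::deque<In> m_latency_buffer;
};
```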
The `tpglibs` repository holds the algorithms that generate the TriggerPrimitives out of the continuously supplied waveforms from each channel. It is used by the readout application, and includes both the native and AVX implementations of the TPG algorithms.
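At its core, a TPG algorithm is a per-channel hit finder over the waveform: subtract the pedestal, apply a threshold, and emit one Trigger Primitive per contiguous region above threshold. A much-simplified native (non-AVX) sketch with invented names; the real implementations in `tpglibs` add pedestal tracking, filtering and AVX vectorisation across many channels at once:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical TP: start sample, length, and summed ADC of the hit.
struct SimpleTP { size_t start = 0; size_t length = 0; int32_t adc_sum = 0; };

// Naive threshold-crossing hit finder over one channel's
// pedestal-subtracted waveform.
std::vector<SimpleTP> find_hits(const std::vector<int16_t>& waveform,
                                int16_t threshold) {
  std::vector<SimpleTP> tps;
  bool in_hit = false;
  SimpleTP tp;
  for (size_t i = 0; i < waveform.size(); ++i) {
    if (waveform[i] > threshold) {
      if (!in_hit) { in_hit = true; tp = SimpleTP{i, 0, 0}; }
      ++tp.length;
      tp.adc_sum += waveform[i];
    } else if (in_hit) {
      in_hit = false;
      tps.push_back(tp);
    }
  }
  if (in_hit) { tps.push_back(tp); }  // hit ran off the end of the window
  return tps;
}
```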
The `triggeralgs` package contains implementations of the algorithms that produce TAs from TPs and TCs from TAs. They are kept in a separate package so that the algorithms can be built outside of the DAQ software stack and used in the offline software, enabling completely independent development of the trigger algorithms. Trigger algorithms can easily be loaded dynamically knowing only their name, through a factory method: see `AlgorithmPlugins.hpp`.
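The effect of such a factory can be sketched as a name-to-constructor registry (a hedged illustration; `AlgorithmPlugins.hpp` actually loads the algorithm as a plugin given only its name, rather than using a static map):

```cpp
#include <functional>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical common interface for TA/TC makers.
struct MakerBase { virtual ~MakerBase() = default; };

// Simplified name -> constructor registry, illustrating how an algorithm
// can be instantiated knowing only its name.
class MakerFactory {
public:
  using Creator = std::function<std::unique_ptr<MakerBase>()>;

  static void register_maker(const std::string& name, Creator c) {
    registry()[name] = std::move(c);
  }

  static std::unique_ptr<MakerBase> make(const std::string& name) {
    auto it = registry().find(name);
    if (it == registry().end()) {
      throw std::runtime_error("unknown maker: " + name);
    }
    return it->second();
  }

private:
  static std::map<std::string, Creator>& registry() {
    static std::map<std::string, Creator> r;
    return r;
  }
};
```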
The trigger data objects are defined as C++ structures, which could potentially be expressed in different ways in memory (e.g. on big-endian or little-endian systems), and do not have to be contiguous in memory. The `trgdataformats` repository contains the C++ structures that hold the trigger data formats, and various overlays that extend the trigger objects to be contiguous in memory, allowing them to be stored in the buffers and saved as fragments.
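The overlay idea can be illustrated as follows: a fixed-size header is followed directly in memory by its variable-length inputs, so the whole object can be written into a buffer or saved as a fragment in one contiguous block. All field and type names below are hypothetical; see `trgdataformats` for the real definitions.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical fixed-size data structs.
struct ActivityData  { uint64_t time_start = 0; uint64_t time_end = 0; };
struct PrimitiveData { uint64_t time = 0; uint32_t channel = 0; };

// Overlay layout: header immediately followed in memory by n_inputs
// PrimitiveData records, so the serialized object is contiguous.
struct ActivityOverlay {
  ActivityData data;
  uint64_t n_inputs = 0;
  // PrimitiveData inputs[n_inputs] follows in memory.
};

// Serialize header + inputs into one contiguous byte buffer.
std::vector<std::byte> serialize(const ActivityData& data,
                                 const std::vector<PrimitiveData>& inputs) {
  ActivityOverlay header{data, static_cast<uint64_t>(inputs.size())};
  std::vector<std::byte> buf(sizeof(header) +
                             inputs.size() * sizeof(PrimitiveData));
  std::memcpy(buf.data(), &header, sizeof(header));
  std::memcpy(buf.data() + sizeof(header), inputs.data(),
              inputs.size() * sizeof(PrimitiveData));
  return buf;
}
```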
The `appmodel` repository contains the configuration schema for all the trigger objects (and more), together with the functions that generate the trigger application out of XML data that follows those schemas. This includes configuring the trigger algorithms, setting up the external triggers, creating the queues between different DAQModules, etc.
The instructions for running with real hardware change often. Please follow:
Offline live emulation is currently not supported in v5.
For trigger code development, please:
- Branch off the `develop` branch. Create a branch (the name should be easily recognisable and reflect the purpose of the feature/fix, e.g. `name/new_feature`) and make a pull request to the appropriate branch.
- At least one reviewer is required (more for big changes). General rule of thumb: don't merge your own PR.
- The `minimal_system_quick_test`, `fake_data_producer_test` and `3ru_3df_multirun_test` tests should be run (the more the better, of course). Some tests require more powerful machines (available at np04).
- There should be no `warnings` when building.
- Format the code with `dbt-clang-format.sh`. Run it without arguments for usage information.