DUNE-DAQ / trigger

Trigger infrastructure of the DUNE DAQ

Trigger

(Last updated July 2023, MiR + AS)

The trigger package contains the modules that make up the DUNE FD DAQ trigger system. Implementations of the physics algorithms that produce data selection objects (trigger activities and trigger candidates) live in the triggeralgs package, and the scripts that generate the trigger application configuration live in daqconf.


  The main goal of the trigger is to extract information from data to form Trigger Decisions. Additionally, in conjunction with the Data Filter, it must reduce the total data volume from the four planned DUNE modules to a level in line with the DUNE requirements, while preserving the required trigger acceptance and efficiency for the various physics signals.

Table of contents

  1. Overview
    1. More details (of trigger functionality)
  2. Structure
    1. Terminology
      1. Main objects
      2. Modules-Makers
      3. Modules-Other
  3. Flow
    1. Flow - detailed
  4. Links to other repositories
    1. Triggeralgs
    2. Trgdataformats
    3. Daqconf
  5. Running
    1. online
    2. offline
  6. Development
  7. Other

Overview

More details (of trigger functionality):

All in all, the trigger is not just about selecting data, but about selecting data in real time: making decisions continuously, within an agreed latency, and with limited computing resources compared to what is available offline.

Trigger's position relative to DAQ:

As can be seen, the trigger lies at the heart of the DUNE DAQ, receiving multiple inputs and creating and forwarding trigger decisions, while also responding to data requests.

Structure

The trigger is designed hierarchically. The minimal (extremely simplified) flow:

  The channel-level information in the form of Trigger Primitives is merged locally (at the scale of one TDAQ unit, such as an APA or CRP) into a cluster representing some type of Trigger Activity. Multiple Trigger Activity outputs can then be merged into a single Trigger Candidate that may include information spanning multiple TDAQ units or multiple slices of time (say, many APAs or many seconds) for a single system. Finally, multiple Trigger Candidates can be merged across systems (e.g. TPC and PDS) in the MLT.
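As a rough picture of this hierarchy, the sketch below uses simplified stand-in types (the real definitions live in trgdataformats); only the nesting of the hierarchy is meant to be accurate:

```cpp
// Simplified stand-ins for the trigger objects; the real definitions live in
// trgdataformats. Only the nesting of the hierarchy is meant to be accurate.
#include <cstdint>
#include <vector>

struct TriggerPrimitive // channel level: one hit on one channel
{
  uint32_t channel;
  uint64_t time_start;
  uint32_t adc_integral;
};

struct TriggerActivity // local: TPs clustered within one TDAQ unit (APA/CRP)
{
  std::vector<TriggerPrimitive> inputs;
};

struct TriggerCandidate // system level: TAs merged across units and/or time slices
{
  std::vector<TriggerActivity> inputs;
};

// Finally, the MLT merges TriggerCandidates across systems (e.g. TPC and PDS)
// into a single Trigger Decision.
```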

Terminology:

Main objects:

  Trigger Primitive (TP): channel-level information extracted from the raw data in readout.
  Trigger Activity (TA): a cluster of TPs built locally, at the scale of one TDAQ unit (an APA or CRP).
  Trigger Candidate (TC): one or more TAs merged together, possibly spanning multiple TDAQ units or slices of time.
  Trigger Decision (TD): the output of the MLT, which can combine TCs across systems (e.g. TPC and PDS).

Modules-Makers:

  TriggerActivityMaker (TAM): wraps a triggeralgs algorithm that builds TAs from TPs.
  TriggerCandidateMaker (TCM): wraps a triggeralgs algorithm that builds TCs from TAs.
  ModuleLevelTrigger (MLT): merges TCs and issues Trigger Decisions.

Modules-Other:

Flow

A more realistic (still simplified) flow through the trigger application:

  The diagram above shows the DAQModules and connections in an example trigger app. Blue rounded rectangles are DAQModules in the app; red rectangles are external inputs; blue rectangles are external outputs; and coloured ellipses are external connections producing requests. Each DAQModule is labelled by its plugin type, and edges show the type of data that flows along the corresponding queue. There are two main sets of inputs: trigger primitives - packaged as TPSets - from readout, seen at the top of the image; and HSI events from the hardware signals interface, seen towards the bottom. Eventually, each of these inputs is processed into Trigger Candidates (TCs), which are fed to the Module Level Trigger (MLT); the MLT in turn issues trigger decisions to the Data Flow Orchestrator.
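For orientation, a TPSet is essentially a time-windowed batch of TPs from one readout source. The stand-in below is an assumption about its shape, not the real definition from the trigger package:

```cpp
// Assumed, simplified shape of a TPSet: a time-windowed batch of TPs from
// one readout source. Field names here are illustrative.
#include <cstdint>
#include <vector>

struct TriggerPrimitive
{
  uint64_t time_start;
  uint32_t channel;
  uint32_t adc_integral;
};

struct TPSet
{
  enum class Type { kUnknown, kPayload, kHeartbeat }; // heartbeats keep time advancing when no TPs arrive
  Type type = Type::kUnknown;
  uint32_t origin = 0;     // which readout unit produced this batch
  uint64_t start_time = 0; // time window covered by the batch
  uint64_t end_time = 0;
  std::vector<TriggerPrimitive> objects; // the TPs themselves
};
```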

Flow - detailed:

Following one TPSet stream from upstream to downstream:

  It's important to keep in mind that the set-up (configuration) can vary widely: we can have one or multiple sub-detector units (APAs/CRPs), and we can have multiple makers (e.g. a separate TAM for the PDS and for the TPC).

The general numbers are:

Links to other repositories

Triggeralgs (link)

The triggeralgs package contains implementations of the algorithms that produce TAs from TPs and TCs from TAs. The idea of keeping these in their own package (instead of in trigger) is to allow the algorithms to be used in the offline software to simulate the effect of the trigger system. There, the main concerns are the trigger efficiency at very low energies, and the consequences of not reading out the whole detector for every event, which will certainly involve not reading out all APAs/CRPs and may also include more aggressive ROI finding. Keeping triggeralgs separate from trigger means that the former package doesn't need to depend on anything from the DAQ software, and so can be built as part of the offline software. One consequence is that triggeralgs doesn't depend on any plugin library, so to load its algorithms dynamically in the DAQ we manually make a plugin in the trigger package for each of the algorithms in triggeralgs, like this: TAMakerPrescaleAlgorithm.cpp (sketched below).
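For illustration, such a plugin shim is essentially one registration macro per algorithm. The include paths and the macro name below are assumptions meant to show the pattern, not verbatim contents of the repository:

```cpp
// Hypothetical sketch of a plugin shim like plugins/TAMakerPrescaleAlgorithm.cpp.
// Both include paths and the registration macro name are assumptions.
#include "trigger/AlgorithmPlugins.hpp"             // assumed: defines the TA-maker registration macro
#include "triggeralgs/TAMakerPrescaleAlgorithm.hpp" // assumed path to the triggeralgs algorithm

// Register the triggeralgs algorithm with the DAQ plugin factory, so the
// trigger application can load it dynamically by name at configure time.
DEFINE_DUNE_TA_MAKER(triggeralgs::TAMakerPrescaleAlgorithm)
```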

Trgdataformats (link)

The approach used by DUNE DAQ for saving fragments to disk is designed for raw data that comes off hardware, which is always a stream of contiguous bytes with a well-defined meaning for each byte. The objects in the trigger system, on the other hand, are defined as C++ structures, which could potentially be laid out differently in memory (e.g. on big-endian and little-endian systems) and are not in general contiguous (they may hold pointers to data on the heap, as when a class has a std::vector member). So if we want to store trigger primitives/activities/candidates in fragments (for debugging, for understanding the trigger system, for passing to the data filter, or for offline analysis), we have to make a representation of each class that is contiguous in memory, with data at specified byte positions. This happens in trgdataformats using overlays, explained here: trigger data objects.
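As a rough illustration of the overlay idea (this is not the actual trgdataformats definition): a trivially-copyable struct with fixed-width fields at fixed byte offsets, which can be copied straight out of a fragment's payload:

```cpp
// Illustrative overlay: fixed-width fields at fixed byte offsets, no heap
// data, so the in-memory layout *is* the on-disk layout. This is not the
// actual trgdataformats TriggerPrimitive; the field list is simplified.
#include <cstdint>
#include <cstring>
#include <type_traits>

struct TriggerPrimitiveOverlay
{
  uint64_t time_start;          // bytes 0-7
  uint32_t time_over_threshold; // bytes 8-11
  uint32_t channel;             // bytes 12-15
  uint32_t adc_integral;        // bytes 16-19
  uint32_t adc_peak;            // bytes 20-23
};
static_assert(std::is_trivially_copyable_v<TriggerPrimitiveOverlay>,
              "an overlay must be trivially copyable");

// Reading a TP back from a raw fragment payload: just bytes at known offsets.
TriggerPrimitiveOverlay read_tp(const void* payload)
{
  TriggerPrimitiveOverlay tp;
  std::memcpy(&tp, payload, sizeof(tp)); // safe alternative to reinterpret_cast
  return tp;
}
```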

Daqconf (link)

The daqconf repository contains tools for generating DAQ system configurations with different characteristics, based on the configuration file and command-line parameters given to it. This includes the configuration of the trigger application: every input that needs to be configurable is handled here and passed to the trigger app. The links (input/output connections between modules) are also defined here.

Running

online

The instructions for running with real hardware change often. Please follow:

offline

Trigger replay app replay_tp_to_chain: a tool for running the trigger application in isolation from the rest of the DUNE DAQ apps. It accepts simple .txt input file(s) containing TPs and runs your configured algorithm over them. These could be real streamed TPs (from the cold-box, for example) or simulated ones (from LArSoft).
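The exact column layout of these .txt files may vary between versions; the sketch below assumes one whitespace-separated TP per line carrying the fields of a TriggerPrimitive, in an order that should be checked against your files:

```cpp
// Hypothetical parser for one line of a TP .txt file. The column order
// (time_start, time_over_threshold, time_peak, channel, adc_integral,
// adc_peak, detid, type) is an assumption -- verify it before relying on it.
#include <cstdint>
#include <sstream>
#include <string>

struct Tp
{
  uint64_t time_start = 0, time_over_threshold = 0, time_peak = 0;
  uint32_t channel = 0, adc_integral = 0, adc_peak = 0, detid = 0, type = 0;
};

// Returns true only if all eight fields were read successfully.
bool parse_tp_line(const std::string& line, Tp& tp)
{
  std::istringstream iss(line);
  return static_cast<bool>(iss >> tp.time_start >> tp.time_over_threshold >> tp.time_peak
                               >> tp.channel >> tp.adc_integral >> tp.adc_peak
                               >> tp.detid >> tp.type);
}
```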

A few example files are available at /nfs/rscratch/trigger/tp_files at CERN:

Example command:

python -m trigger.replay_tp_to_chain -s 10 --trigger-activity-plugin TAMakerHorizontalMuonAlgorithm --trigger-activity-config 'dict(adjacency_threshold=100, window_length=10000, adj_tolerance=20)' --trigger-candidate-plugin TCMakerHorizontalMuonAlgorithm --trigger-candidate-config 'dict(adjacency_threshold=100, window_length=10000, adj_tolerance=20)' -l 1 --input-file $TPDATA/tp_dataset_run020472_2s.txt json

Development

For trigger code development, please:

Other