Idea description
Currently we intend to pass a batch of N muons through the detector, infer X0 predictions per passive voxel (replacing missing predictions with a default prediction), and then update the detector based on the loss. Unless N is high enough, the uncertainty on the per-voxel predictions can be large due to resolution, and the predictions can be inaccurate due to the random term (see #13).

An alternative approach would be to repeatedly pass batches of muons through the detector until the uncertainty on the predictions falls below a specified threshold, and then penalise the loss based on the number of batches required to achieve acceptable precision. Assuming a constant muon flux, requiring more muons means a longer imaging time, so this may also allow us to develop detectors for time-sensitive applications (e.g. preventing queues at security checkpoints). A sketch of this loop is given below.
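A minimal sketch of what the adaptive-batching loop could look like, assuming a PyTorch-style setup. `detector`, `generate_muon_batch`, `infer_x0`, `unc_threshold`, and `time_weight` are all hypothetical names introduced here for illustration, not existing APIs in the repo:

```python
import math

import torch
import torch.nn.functional as F


def scan_until_precise(
    detector,                    # hypothetical differentiable detector model
    generate_muon_batch,         # hypothetical muon-batch generator
    infer_x0,                    # hypothetical per-voxel X0 inference
    unc_threshold: float = 0.1,  # assumed relative-uncertainty target per voxel
    max_batches: int = 100,
    batch_size: int = 1000,
):
    """Pass batches of muons until the per-voxel uncertainty is acceptable.

    Returns the averaged X0 predictions and the number of batches used.
    """
    batch_preds = []
    for n_batches in range(1, max_batches + 1):
        muons = generate_muon_batch(batch_size)
        batch_preds.append(infer_x0(detector, muons))  # shape: (n_voxels,)
        stacked = torch.stack(batch_preds)             # (n_batches, n_voxels)
        x0_pred = stacked.mean(dim=0)
        if n_batches > 1:
            # Standard error of the mean per voxel across batches
            unc = stacked.std(dim=0, unbiased=True) / math.sqrt(n_batches)
            rel_unc = unc / x0_pred.abs().clamp_min(1e-8)
            if rel_unc.max() < unc_threshold:
                break
    return x0_pred, n_batches


def total_loss(x0_pred, x0_true, n_batches, time_weight=0.01):
    """Accuracy term plus a penalty on imaging time (batch count);
    `time_weight` trades prediction accuracy against scan duration."""
    return F.mse_loss(x0_pred, x0_true) + time_weight * n_batches
```

One caveat with this sketch: the batch count is a discrete quantity, so the time penalty as written would not contribute gradients to the detector parameters. For gradient-based detector updates it would likely need a differentiable surrogate, e.g. an estimate of the batches required extrapolated from the per-voxel uncertainty after a single batch.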