The mount interface has a method onBatchNotify which is not yet implemented. Batching will work with policy.sampleRate: for each sample period, a batch of updated objects is delivered to the mount.
Objects are delivered as an iterator. This lets a mount abstract away from how the batch is stored internally, and enables more advanced scenarios, such as offloading events to disk and loading samples one by one when onBatchNotify is called (future feature).
In addition to onBatchNotify, another callback, onHistoryBatchNotify, will be added to the mount interface. Whereas onBatchNotify only provides the last value of each updated object, onHistoryBatchNotify keeps track of the full object history.
To ensure that memory is not exhausted, the mount should offer an option in the mount policy to specify a maximum queue size. If the number of stored events exceeds this size, the mount should throttle the application. Throttling should be done in such a way that the time for a corto_update/corto_publish remains relatively constant.