There should be simple approximate models for the ctlike runtime.
Unbinned: t = t(n_threads, n_obs, n_events)
Binned: t = t(n_threads, n_obs, n_bins)
E.g. a model for the unbinned case could be
t = A + B * (n_obs / n_threads) + C * (n_events / n_threads)
... or not: we noticed that the unbinned ctlike runtime was the same in these two cases:
n_threads = 3, n_obs = 100, n_events = 1700k
n_threads = 3, n_obs = 100, n_events = 36k
i.e. a factor of 50 in the number of events didn't matter.
In detail the runtime will of course also depend e.g. on the model and model parameter start values, but there should be regimes with simple runtime scaling behaviours.
@jknodlseder We will measure this of course, but do you have a better model of what ctlike does as far as performance is concerned, and perhaps a formula for how you expect the runtime to scale?
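Once timings exist, the three coefficients of the unbinned model above can be estimated by ordinary least squares. A minimal sketch, using made-up timing numbers (synthetic values that follow the model exactly, not real ctlike measurements) just to show the mechanics:

```python
import numpy as np

# Hypothetical (n_threads, n_obs, n_events, runtime_s) tuples.
# These are NOT real ctlike benchmarks; they were generated from the
# model t = A + B*(n_obs/n_threads) + C*(n_events/n_threads)
# with A = 1.0 s, B = 0.1 s/obs, C = 1e-5 s/event, for illustration.
measurements = [
    (1, 10, 100_000, 3.00),
    (2, 10, 100_000, 2.00),
    (4, 100, 200_000, 4.00),
    (1, 100, 1_700_000, 28.00),
    (4, 50, 400_000, 3.25),
]

data = np.array(measurements, dtype=float)
n_threads, n_obs, n_events, t = data.T

# Design matrix for t = A + B*(n_obs/n_threads) + C*(n_events/n_threads)
X = np.column_stack([np.ones_like(t),
                     n_obs / n_threads,
                     n_events / n_threads])
(A, B, C), *_ = np.linalg.lstsq(X, t, rcond=None)

print(f"A = {A:.3f} s, B = {B:.4f} s/obs, C = {C:.2e} s/event")

def predict(n_threads, n_obs, n_events):
    """Predicted runtime under the fitted linear model."""
    return A + B * n_obs / n_threads + C * n_events / n_threads
```

If the real timings fit this model poorly (e.g. the factor-50 case above, where events appear not to matter), the residuals of such a fit would make that visible immediately, and would point to a per-observation overhead term dominating over the per-event term.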