The current codebase computes observed distribution profiles (dp) first; these can then be smoothed while respecting their monotonicity. Concentration profiles (cp) are always obtained by taking finite differences of dps, so a smooth cp can only be computed by reconstructing the dp, smoothing it, and differencing, which is time-consuming.
Refactor the workflow in one of the following ways:
1) Compute a smooth cp directly (an unsmoothed cp does not make much sense), e.g. using density estimates, and then integrate it to obtain the smooth dp. Keep the observed dp computation, but make sure the session durations match.
2) Implement a faster monotone smoother for dps.
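Option 1 could be sketched roughly as follows: estimate the cp with a weighted Gaussian kernel density, then take its tail integral so the resulting dp is monotone by construction and its value at the left edge of the grid approximates the session duration. All names (`smooth_profiles`, `values`, `weights`, `grid`, `bandwidth`) are illustrative, not from the codebase.

```python
import numpy as np

def smooth_profiles(values, weights, grid, bandwidth):
    """Smooth cp via weighted Gaussian KDE, then integrate to a smooth dp.

    Illustrative sketch only; names do not come from the codebase.
    `weights` should sum to the total session duration.
    """
    # cp(x): sum of Gaussian kernels centred on the observed values
    diffs = (grid[:, None] - np.asarray(values, float)[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs**2) / (bandwidth * np.sqrt(2 * np.pi))
    cp = kernels @ np.asarray(weights, float)
    # dp: tail integral of cp (trapezoid rule), so dp is non-increasing
    # and dp at the grid start approximates the total weight, i.e. the
    # session duration -- provided the grid covers the KDE's support
    seg = (cp[1:] + cp[:-1]) / 2 * np.diff(grid)
    dp = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])
    return cp, dp
```

Because cp is non-negative, no separate monotonicity constraint is needed for the integrated dp; the open question of whether dp at zero must equal the session duration reduces to whether the grid and bandwidth keep all KDE mass inside the evaluation range.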
A major consideration is whether the dp value at zero should always match the session duration for all variables or not.
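For option 2, one candidate for a faster monotone smoother is the pool-adjacent-violators algorithm (PAVA), which fits the closest non-increasing sequence to an observed dp in a single linear pass. This is a generic isotonic-regression sketch, not the codebase's smoother; the function and argument names are hypothetical.

```python
import numpy as np

def pava_decreasing(y, w=None):
    """Least-squares fit of a non-increasing sequence to y via
    pool-adjacent-violators -- an O(n) monotone smoother sketch.

    Hypothetical names; not taken from the codebase.
    """
    y = np.asarray(y, float)
    if w is None:
        w = np.ones_like(y)
    # fit a non-decreasing sequence to -y, then negate the result back
    vals, wts, sizes = [], [], []
    for yi, wi in zip(-y, w):
        vals.append(yi); wts.append(wi); sizes.append(1)
        # pool adjacent blocks while the monotone constraint is violated
        while len(vals) > 1 and vals[-2] > vals[-1]:
            v2, w2, s2 = vals.pop(), wts.pop(), sizes.pop()
            v1, w1, s1 = vals.pop(), wts.pop(), sizes.pop()
            wt = w1 + w2
            vals.append((w1 * v1 + w2 * v2) / wt)
            wts.append(wt); sizes.append(s1 + s2)
    out = np.concatenate([np.full(s, v) for v, s in zip(vals, sizes)])
    return -out
```

One useful property here: PAVA leaves an already non-increasing dp unchanged, so whatever is decided about pinning dp at zero to the session duration, that value survives smoothing as long as it is the largest entry.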