schuemie opened this issue 10 months ago
The grid points (say, e.g., [1, 2, 3, 4]) are split in half and executed as [2, 1] and [3, 4], warm-starting each fit from its neighbor to promote numerical stability and decrease the total number of cycles. This is of course not strictly necessary, and we could introduce a maximizeParallelization flag that just runs all of [1, 2, 3, 4] in one parallel batch. I can work on this.
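A minimal sketch of what that flag's batching logic might look like (the function name batchGridPoints and the flag semantics are hypothetical, not the actual Cyclops internals):

```r
# Hypothetical sketch of how a maximizeParallelization flag could change the
# batching of grid points. Not the actual Cyclops implementation.
batchGridPoints <- function(points, maximizeParallelization = FALSE) {
  if (maximizeParallelization) {
    # Single batch: all points can be evaluated in parallel at once,
    # at the cost of losing warm starts between neighboring points.
    return(list(points))
  }
  # Default behavior described above: split at the midpoint, then evaluate
  # each half outward from the center, so every fit warm-starts from the
  # solution at the previous, nearby point.
  mid <- ceiling(length(points) / 2)
  list(rev(points[seq_len(mid)]), points[(mid + 1):length(points)])
}

batchGridPoints(c(1, 2, 3, 4))                                 # [2, 1] and [3, 4]
batchGridPoints(c(1, 2, 3, 4), maximizeParallelization = TRUE) # one batch [1, 2, 3, 4]
```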
Also, given the >100,000 strata, it would be best to exclude the non-informative ones, if possible.
Thanks! The non-informative strata have already been removed.
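For reference, this kind of filtering can be sketched in base R. The column names stratumId and y (events per interval) are assumptions for illustration; the idea is that strata with no outcome events contribute a constant to the conditional Poisson likelihood:

```r
# Sketch: strata in which no outcome events occurred contribute a constant to
# the conditional Poisson likelihood, so they can be dropped up front.
# The column names stratumId and y are assumptions for illustration.
removeNonInformativeStrata <- function(outcomes) {
  eventsPerStratum <- tapply(outcomes$y, outcomes$stratumId, sum)
  informative <- names(eventsPerStratum)[eventsPerStratum > 0]
  outcomes[outcomes$stratumId %in% informative, ]
}

outcomes <- data.frame(stratumId = c(1, 1, 2, 2), y = c(0, 1, 0, 0))
removeNonInformativeStrata(outcomes)  # keeps only the rows of stratum 1
```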
Below is some simulation code (using the SelfControlledCaseSeries package) that reproduces the issue:
# Simulate data ----------------------------------------------------------------
library(SelfControlledCaseSeries)
settings <- createSccsSimulationSettings(includeAgeEffect = FALSE,
                                         includeCalendarTimeEffect = TRUE,
                                         includeSeasonality = TRUE)
set.seed(123)
sccsData <- simulateSccsData(10000, settings)
seasonalitySettings <- createSeasonalityCovariateSettings()
calendarTimeSettings <- createCalendarTimeCovariateSettings()
covarSettings <- createEraCovariateSettings(label = "Exposure of interest",
                                            includeEraIds = c(1, 2),
                                            stratifyById = TRUE,
                                            start = 0,
                                            end = 0,
                                            endAnchor = "era end")
studyPopulation <- createStudyPopulation(sccsData = sccsData,
                                         outcomeId = 10,
                                         firstOutcomeOnly = FALSE,
                                         naivePeriod = 0)
sccsIntervalData <- createSccsIntervalData(
  studyPopulation = studyPopulation,
  sccsData = sccsData,
  eraCovariateSettings = covarSettings,
  seasonalityCovariateSettings = seasonalitySettings,
  calendarTimeCovariateSettings = calendarTimeSettings
)
# Save data so we can re-use:
saveRDS(collect(sccsIntervalData$outcomes), "~/Data/temp/outcomes.rds")
saveRDS(collect(sccsIntervalData$covariates), "~/Data/temp/covariates.rds")
# Fit model --------------------------------------------------------------------
library(Cyclops)
outcomes <- readRDS("~/Data/temp/outcomes.rds")
covariates <- readRDS("~/Data/temp/covariates.rds")
cyclopsData <- Cyclops::convertToCyclopsData(outcomes,
                                             covariates,
                                             modelType = "cpr",
                                             addIntercept = FALSE,
                                             checkRowIds = FALSE,
                                             quiet = TRUE)
system.time(
  fit <- Cyclops::fitCyclopsModel(cyclopsData,
                                  control = createControl(threads = 10))
)
# user system elapsed
# 8.524 0.190 2.755
# Compute likelihood profile
system.time(
  logLikelihoodProfile <- getCyclopsProfileLogLikelihood(
    object = fit,
    parm = 1000,
    bounds = c(log(0.1), log(10)),
    tolerance = 0.1
  )
)
# user system elapsed
# 371.697 0.847 58.086
In the SelfControlledCaseSeries package we fit conditional Poisson regressions with >100,000 strata and 20 covariates, of which 18 (representing season and calendar time) can be regularized. We need a likelihood profile for one covariate (the treatment effect).
Currently, after data preparation, 20% of the time is spent on cross-validation and model fitting, while 80% is spent on computing the likelihood profile (using an adaptive grid). It would be nice if we could make the profiling faster.
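For concreteness, the general shape of such an adaptive-grid profiler can be sketched as follows. This is illustrative only, not the getCyclopsProfileLogLikelihood internals, and it naively re-evaluates all points each round rather than caching:

```r
# Illustrative sketch of an adaptive-grid profiling loop: start from a coarse
# grid over the bounds and bisect neighboring points until linear interpolation
# of the log-likelihood is accurate to within `tolerance`. For simplicity all
# points are re-evaluated each round; a real implementation would cache them.
adaptiveProfile <- function(logLik, bounds, tolerance = 0.1) {
  x <- seq(bounds[1], bounds[2], length.out = 5)
  y <- vapply(x, logLik, numeric(1))
  repeat {
    mids <- (head(x, -1) + tail(x, -1)) / 2
    yMids <- vapply(mids, logLik, numeric(1))
    # How far the true log-likelihood at each midpoint is from the straight
    # line between its neighbors:
    err <- abs(yMids - (head(y, -1) + tail(y, -1)) / 2)
    if (!any(err > tolerance)) break
    x <- sort(c(x, mids[err > tolerance]))
    y <- vapply(x, logLik, numeric(1))
  }
  data.frame(point = x, value = y)
}

profile <- adaptiveProfile(function(x) -x^2, c(log(0.1), log(10)), tolerance = 0.1)
```

Each round proposes a batch of midpoints at once, which matches the "about 10 grid points at a time" behavior described below and is what makes batch-level parallelism attractive.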
The adaptive grid function proposes about 10 grid points at a time, which are fed into fixedGridProfileLogLikelihood(). In a typical setting I specify 10 threads, but because the set of grid points is split in two, only 5 threads are used at most. An easy win would be to run all 10 grid points in parallel, increasing speed by about a factor of two. What is the reason for splitting the set of grid points?
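To illustrate the thread-utilization point, here is a sketch using parallel::mclapply as a stand-in for Cyclops's internal multithreaded evaluation (the function evalGrid and its arguments are hypothetical):

```r
# Sketch: with the grid split into two sequential batches, at most
# ceiling(n / 2) of the requested threads can ever be busy; a single batch
# lets all points run concurrently. parallel::mclapply stands in here for
# the multithreaded C++ evaluation inside Cyclops (note mclapply falls back
# to serial execution on Windows).
library(parallel)

evalGrid <- function(points, logLik, threads = 10, split = TRUE) {
  batches <- if (split) {
    mid <- ceiling(length(points) / 2)
    list(points[seq_len(mid)], points[-seq_len(mid)])
  } else {
    list(points)
  }
  # Batches run one after another; within a batch, points run in parallel,
  # so concurrency is capped at the largest batch size.
  unlist(lapply(batches, function(batch) {
    unlist(mclapply(batch, logLik, mc.cores = min(threads, length(batch))))
  }))
}

points <- seq(log(0.1), log(10), length.out = 10)
identical(evalGrid(points, function(x) -x^2, split = TRUE),
          evalGrid(points, function(x) -x^2, split = FALSE))  # same values either way
```

The trade-off is the one described at the top of the thread: a single batch maximizes concurrency, while two ordered batches allow warm starts between neighboring points.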