Currently it appears that the use of an RDBMS is supported [for storing computed features] in [`getDbDefaultCovariateData`](https://github.com/OHDSI/FeatureExtraction/blob/bddb9ca9ce946a540b04e7bfa0a2465344b7b249/R/GetDefaultCovariates.R#L152); however, this does not appear to be possible for aggregate features. When performing feature extraction on around 1000+ cohorts, storing the aggregate features in Andromeda objects on disk becomes increasingly difficult as the features grow in size.
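For context, a minimal sketch of the aggregated extraction path this refers to, assuming the documented `getDbCovariateData()` entry point; the connection values are placeholders and argument names (e.g. `cohortIds` vs. `cohortId`) may differ slightly between package versions:

```r
library(FeatureExtraction)

# Placeholder connection details for the CDM database.
connectionDetails <- DatabaseConnector::createConnectionDetails(
  dbms = "postgresql",
  server = "localhost/ohdsi",
  user = "ohdsi_user",
  password = "secret"
)

covariateSettings <- createDefaultCovariateSettings()

# With aggregated = TRUE the result is still an Andromeda-backed CovariateData
# object on local disk, which is what becomes unwieldy across ~1000 cohorts.
covariateData <- getDbCovariateData(
  connectionDetails = connectionDetails,
  cdmDatabaseSchema = "cdm",
  cohortDatabaseSchema = "results",
  cohortTable = "cohort",
  cohortIds = c(1),
  covariateSettings = covariateSettings,
  aggregated = TRUE
)
```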
It also looks like the code for `getDbDefaultCovariateData` doesn't function correctly for the non-aggregate features, as there is a discrepancy between the executed SQL statements here and here.

Storing results in a new table will also probably break when the tables already exist, so an `overwriteTables` flag should be included.
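A rough sketch of how such a guard might look; the helper name `ensureResultsTable()`, its arguments, and the `overwriteTables` flag itself are all hypothetical and not part of the current FeatureExtraction API (only the `DatabaseConnector` calls are existing functions):

```r
# Illustrative only: a possible guard for the proposed overwriteTables flag.
ensureResultsTable <- function(connection,
                               resultsDatabaseSchema,
                               tableName,
                               overwriteTables = FALSE) {
  existingTables <- tolower(DatabaseConnector::getTableNames(connection, resultsDatabaseSchema))
  if (tolower(tableName) %in% existingTables) {
    if (!overwriteTables) {
      stop(sprintf("Table %s.%s already exists. Set overwriteTables = TRUE to replace it.",
                   resultsDatabaseSchema, tableName))
    }
    # Drop the stale table so the subsequent CREATE TABLE succeeds.
    DatabaseConnector::renderTranslateExecuteSql(
      connection,
      "DROP TABLE @results_schema.@table;",
      results_schema = resultsDatabaseSchema,
      table = tableName
    )
  }
  invisible(NULL)
}
```

Defaulting `overwriteTables` to `FALSE` would keep the current behaviour safe (fail rather than silently clobber existing results) and make overwriting an explicit opt-in.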