Closed: jreps closed this issue 2 years ago
Here is the error report:
DBMS: redshift
Error: java.lang.OutOfMemoryError: Java heap space
SQL:
SELECT * FROM (
  SELECT row_id, covariate_id, covariate_value FROM #cov_1
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_2
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_3
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_4
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_5
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_6
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_7
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_8
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_9
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_10
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_11
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_12
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_13
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_14
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_15
  UNION ALL SELECT row_id, covariate_id, covariate_value FROM #cov_16
) all_covariates;
R version: R version 4.0.5 (2021-03-31)
Platform: x86_64-w64-mingw32
Attached base packages:
Other attached packages:
One approach might be to load up a Java profiler (I use, e.g., IntelliJ's) and attach it to the JVM that R starts. You should then be able to watch the construction and destruction of Java objects in real time, along with the calling code line numbers.
Alternatively, you might try increasing the heap size for your JVM. From the command line the -X* options (e.g. -Xmx) are useful; I am not sure how to specify these from R's .onLoad function.
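For what it's worth, the usual way to pass -X* flags from R (rather than from .onLoad) is through the java.parameters option, which rJava reads when it initializes the JVM. A minimal sketch — the 8g value is just an example, and the option must be set before any rJava-dependent package is loaded:

```r
# Must run BEFORE library(DatabaseConnector) or any other package
# that initializes the JVM via rJava; once the JVM is up, heap
# settings cannot be changed for that R session.
options(java.parameters = "-Xmx8g")  # example heap size, adjust to your machine

library(DatabaseConnector)
```

If the JVM has already started in the session, the option is silently ignored, so a fresh R session is needed for it to take effect.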
I don't want people increasing the heap size, as that would only mask the underlying problem.
I tried to debug this issue a few weeks ago, but was stuck because the problem only occurs after running a very lengthy script (several days). If you then restart the script to pick up where it left off, it continues without error. Somewhere, something doesn't clean up after itself in Java heap space. Setting up a profiler is a bit tricky, since it would need to run in R, where all the many steps needed to reproduce the problem are initiated.
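As a stopgap while debugging, one can at least nudge collections between the lengthy steps from R. This is only a sketch — it drops R-side references and requests a Java GC via rJava, but it cannot free objects the JDBC driver still holds on to, so it won't cure a genuine leak:

```r
library(rJava)

# Hypothetical cleanup helper to call between long-running extraction steps.
# Drop R-side references first, then request a collection on both sides.
flush_heaps <- function() {
  gc()                                    # R garbage collection releases jobjRef handles
  .jcall("java/lang/System", "V", "gc")   # static call to java.lang.System.gc()
}
```

Watching heap usage before and after such calls (e.g. via Runtime.totalMemory()/freeMemory()) can at least indicate whether the growth is reclaimable garbage or genuinely retained objects.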
Since Andromeda doesn't run in Java, I don't think this is an Andromeda issue.
I keep getting the Java heap error (Jill had a similar error as well) when extracting the data into an SQLite database. Jill had the error when extracting a single big cohort; I get the error after running a few cohort extracts. It stops for a bit after I terminate and restart R, but then reappears. Is there a memory leak somewhere?
Connecting using Redshift driver
Constructing the at risk cohort |============================================================================================================| 100%
Executing SQL took 1.46 secs
Fetching cohorts from server
Loading cohorts took 24.6 secs
Sending temp tables to server
Constructing features on server |============================================================================================================| 100%
Executing SQL took 1.34 mins
Fetching data from server
Error: Error executing SQL: java.lang.OutOfMemoryError: Java heap space
An error report has been created at C:/Users/admin_jreps/Documents/errorReportSql.txt
Run rlang::last_error() to see where the error occurred.