Open · alessandro-peron-sdg opened this issue 1 year ago
Can you also post the code you used to do the memory profiling?
Sure, here it is:
rm(list=ls(all=TRUE))
library(tibble)
library(glue)
library(lubridate)
run_command <- function(filename) {
  while (TRUE) {
    # Total resident memory (RSS, reported by ps in KB) across all of this
    # user's processes, converted to GB. Replace `username` with the actual user.
    output <- system("ps -u username --no-headers -o rss | awk '{sum+=$1} END {print (sum/1024/1024)}'", intern = TRUE)
    output_df <- tibble(time = format(Sys.time(), "%Y-%m-%d %H:%M:%S"), output = output)
    # Check if the file exists
    if (file.exists(filename)) {
      # Append the output to the existing CSV file
      write.table(output_df, file = filename, append = TRUE, sep = ",", row.names = FALSE, col.names = FALSE)
    } else {
      # Create a new CSV file and write the output with a header row
      write.table(output_df, file = filename, append = FALSE, sep = ",", row.names = FALSE, col.names = TRUE)
    }
    Sys.sleep(1)
  }
}
run_command("output_file.csv")
@DavisVaughan any news on this?
I'm having the same issue.
Same for me as well. I also recently found that several gigabytes of temp files had been created and never cleaned up, and parallelized functions do not complete as quickly as they used to. I've been using furrr for years with excellent performance, so this is unusual. It feels like something else changed in the R ecosystem that's impacting furrr.
I may have to switch to Crew (powered by mirai). It's a shame because nothing comes close to furrr in terms of syntactic sugar and ease of use.
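For reference, the basic pattern I would be switching to looks roughly like this (a sketch based on the crew README; names and arguments may differ slightly between versions):
# Rough sketch of the crew/mirai pattern (based on the crew README;
# worker and task counts here are purely illustrative).
library(crew)
controller <- crew_controller_local(workers = 4)
controller$start()
# Each push() sends one task; `command` is evaluated on a worker process.
for (i in 1:8) {
  controller$push(name = paste0("task_", i), command = Sys.sleep(1))
}
controller$wait()            # block until all tasks have finished
result <- controller$pop()   # pop() returns one finished task (or NULL)
controller$terminate()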
I am facing some issues parallelizing processes with furrr::future_apply. This is the setting I am having issues with:
When I profile memory and time for these 4 plans, this is what I get:
I launched 4 different jobs from RStudio Server, while profiling, in a separate job, all memory used by my user's processes to get the data for the graph.
This is the output of my sessionInfo() for the parallelization jobs:
Is this behavior normal? I did not expect the steep increase in memory for all the plans, nor the increase in time as I increase the number of workers.
I also tested Sys.sleep(1) in parallel, and I got the result I expected: time decreases as I increase the number of workers. What I am trying to parallelize is far more complex than this, i.e. a series of nested wrapper functions that train some time series models and run inference, writing a CSV and not returning anything.
I feel like I am missing something very simple, yet I cannot wrap my head around it. What concerns me most is the memory increase, as this will be a very memory-intensive function.
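For completeness, the Sys.sleep() timing test was roughly of this shape (worker counts and task count here are illustrative, not my exact settings):
# Minimal sketch of the Sys.sleep() timing check described above.
library(furrr)
time_plan <- function(workers) {
  plan(multisession, workers = workers)
  on.exit(plan(sequential), add = TRUE)
  # Time 16 one-second tasks under the given number of workers.
  system.time(future_map(1:16, ~ Sys.sleep(1)))[["elapsed"]]
}
sapply(c(1, 2, 4, 8), time_plan)
# Elapsed time decreases as workers increase, as expected.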