**Open** — AlexSiormpas opened this issue 4 years ago
@dfalbel Thanks for the pointer. So, in terms of implementation steps, would it look like this:
Would parallel computation also help here if I replace the basic
`for (i in 1:400) { ... }`
with
`foreach(i = 1:400) %dopar% { ... }`?
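For reference, a minimal sketch of the `foreach`/`%dopar%` pattern asked about above, using the `doParallel` backend. The body here is a trivial stand-in (`i^2`) rather than actual network training, and the worker count is an assumption; note that sharing `torch` objects across worker processes may need extra care, since each worker would have to build its own model and load its own data.

```r
library(foreach)
library(doParallel)

# Hypothetical worker count: leave one core free for the OS.
cl <- makeCluster(max(1, parallel::detectCores() - 1))
registerDoParallel(cl)

# Each iteration runs on a worker and returns its result;
# foreach collects the 400 results on the master process.
results <- foreach(i = 1:400, .combine = c) %dopar% {
  i^2  # stand-in for "train the i-th small network and return it"
}

stopCluster(cl)
```

Because each iteration's result is returned to the master and the worker's environment is discarded, this pattern also sidesteps per-iteration memory buildup inside the workers.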
I want to train ~400 small neural networks (3 layers, 5 neurons each) over a data set. I've written a loop to do this, with a save call at the end of each iteration to persist the R data for that step.

In the early iterations things run smoothly, but as the loop progresses RAM usage keeps growing and my laptop slows down. Currently it takes ~18 hours to finish the full process end-to-end. Is there anything I can do to clean up memory so the process runs more smoothly?
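One common way to keep a loop like this from accumulating memory is to save each model to its own file, drop the reference, and then call `gc()` explicitly so R (and, for `torch` tensors, the underlying native memory) can actually be reclaimed between iterations. A minimal sketch of that pattern — `train_one_network()` is replaced here by a trivial stand-in, and the output directory is an assumption:

```r
out_dir <- tempdir()  # assumed output location; use any writable directory

for (i in 1:400) {
  # Stand-in for training the i-th small network; in the real loop this
  # would be the fitted model object.
  model <- list(weights = rnorm(100))

  # Persist this step's result to its own file instead of keeping
  # all 400 models alive in the R session at once.
  saveRDS(model, file.path(out_dir, sprintf("model_%03d.rds", i)))

  rm(model)  # drop the only reference to this iteration's model...
  gc()       # ...and force a collection so the memory is freed now
}
```

Keeping only one model in memory at a time, rather than appending each one to a growing list, is usually what stops the steady RAM growth described above.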