quinngroup / dr1dl-pyspark

Dictionary Learning in PySpark
Apache License 2.0

Heap and memory issues @GACRC #62

Closed: milad181 closed this issue 8 years ago

milad181 commented 8 years ago

@magsol Please find attached the log files, submission commands, and configuration files that were used. Those marked with GACRC and the one with "sub" relate to Sapelo; the rest are for the local server.

- Logfileandsubmitcommand.txt
- spark-defaults.txt
- log4j.properties.txt
- GACRC.log4j.properties.txt
- GACRC.spark-defaults.txt
- sub_4tasks10_new.sh.txt
- GACRC.Spark_4tasks10new_124g_48c.txt

magsol commented 8 years ago

This is for all of @quinngroup/bigneuron --

@milad181 posted this out of memory (OOM) issue a few days ago. According to the logs (specifically GACRC.Spark_4tasks10new_124g_48c.txt), it fails on the very first distributed action: zipWithIndex. Offhand, it looks like Spark is attempting to load too many partitions into memory at once. Without more details it's hard to know for sure.
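For context, here is a minimal sketch of what that failing step likely looks like, assuming the input is loaded with sc.textFile and then indexed; the path, app name, and partition count are placeholders, not the project's actual values. Asking for more (and therefore smaller) partitions is one way to reduce the memory each executor task needs when zipWithIndex scans the data:

```python
from pyspark import SparkContext

sc = SparkContext(appName="dr1dl-oom-check")

# Placeholder path and partition count; more, smaller partitions generally
# lower the per-task memory footprint at the cost of scheduling overhead.
raw = sc.textFile("hdfs:///path/to/input.txt", minPartitions=200)

# zipWithIndex is the first step that forces a full pass over the dataset,
# which is where the OOM reportedly occurs in the GACRC logs.
indexed = raw.zipWithIndex()
print(indexed.count())
```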

More broadly, there are several open tickets that still need fixing and may contribute to problems down the road. In particular, issues #52, #53, and #58 are straightforward fixes that will make things much easier. Please have a look at them and see if you can submit fixes; #53 in particular is a very nasty bug that may crash any analysis we run down the road. #58 will help control the number of partitions by making it a command-line parameter (sketched below), potentially fixing the memory issues.
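As a rough sketch of what the #58 change might look like (the flag names and defaults here are hypothetical, not the final interface), the partition count would be read from the command line and passed straight into the load:

```python
import argparse
from pyspark import SparkContext

parser = argparse.ArgumentParser(description="dr1dl-pyspark (sketch for #58)")
parser.add_argument("-i", "--input", required=True, help="Path to the input matrix")
parser.add_argument("--partitions", type=int, default=None,
                    help="Number of RDD partitions (hypothetical flag)")
args = parser.parse_args()

sc = SparkContext(appName="dr1dl")

# A user-supplied partition count lets the job trade per-partition memory
# for scheduling overhead instead of relying on Spark's default split.
S = sc.textFile(args.input, minPartitions=args.partitions)
print(S.getNumPartitions())
```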

I'm still looking into things, but these issues need to be fixed ASAP.

milad181 commented 8 years ago

Thank you, @magsol.

We are currently working on the MICCAI write-ups; afterward we will turn to fixing the current issues. @LindberghLi, would you look at the bugs, especially #53, to get a better sense of the problems?

magsol commented 8 years ago

@quinngroup/bigneuron I have a local cluster that is correctly configured to run this job, if you would prefer to do your testing there instead.

(I was given instructions for creating custom images only yesterday, and I now have an image with Spark 1.6 and Python 3.5.)

magsol commented 8 years ago

Internal testing suggests the memory errors were due to incorrectly configured executors. Setting

spark.executor.memory=14g

within the configuration did the trick.
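For anyone reproducing this, the same setting can also be applied programmatically; this is just a minimal sketch, and the 14g value is the one that happened to work in this test, so adjust it for your own cluster:

```python
from pyspark import SparkConf, SparkContext

# Equivalent to the spark.executor.memory entry in spark-defaults.conf;
# it must be set before the SparkContext is created to take effect.
conf = SparkConf().set("spark.executor.memory", "14g")
sc = SparkContext(appName="dr1dl", conf=conf)
```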