porterjamesj closed this issue 10 years ago
More info: apparently OSX has a secret OPEN_LIMIT: http://www.engardelinux.org/modules/index/list_archives.cgi?list=postfix-devel&page=0025.html&month=2013-03
Classic Apple. I'm not sure Python even has an API to work around this.
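For what it's worth, the real per-process ceiling is readable on OSX via the kern.maxfilesperproc sysctl, so one possible workaround is shelling out to the sysctl binary. This is only a sketch of that idea, not anything cluster_helper does, and the helper name is made up:

```python
import resource
import subprocess
import sys

def max_files_per_proc():
    """Best-effort query of the true per-process open-file limit.

    On OSX the hard rlimit reports infinity even though the kernel
    enforces kern.maxfilesperproc, so ask sysctl directly there;
    elsewhere the hard rlimit is trustworthy.
    """
    if sys.platform == "darwin":
        try:
            out = subprocess.check_output(["sysctl", "-n", "kern.maxfilesperproc"])
            return int(out.decode().strip())
        except (OSError, subprocess.CalledProcessError, ValueError):
            pass  # sysctl unavailable or unparsable; fall back to rlimit
    return resource.getrlimit(resource.RLIMIT_NOFILE)[1]
```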
James: Thanks for the heads up, and sorry about the issue. It's annoying that we can't get the limit via some kind of system call, but Jeff brought this up in bcbio and we swapped target_procs to match the OSX default, since the 50k number was arbitrarily high anyway:
Hopefully this fix will get it working without issues for OSX testing. Thanks again.
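The idea is roughly the following sketch; the helper name and the 10240 constant (the usual OSX kern.maxfilesperproc default) are my assumptions, not the actual bcbio or cluster_helper code:

```python
import resource

# Usual OSX default for kern.maxfilesperproc; an assumed constant here.
OSX_DEFAULT_NOFILE = 10240

def safe_target_procs(requested):
    """Clamp a requested process/file-descriptor target to something
    the OS will actually allow, instead of an arbitrarily high 50k."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    # Never ask for more descriptors than the current soft limit or
    # the OSX per-process ceiling permits.
    return min(requested, soft, OSX_DEFAULT_NOFILE)
```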
yeah that'll work. thanks!
This is a bit of a weird one and, as far as I can tell, is either an OSX or a Python bug, not yours. When trying to run locally on OSX, I get the following:
I dug into it a bit. On my machine, resource.getrlimit(resource.RLIMIT_NOFILE) gives (2560, 9223372036854775807). That latter value is RLIM_INFINITY, since OSX "doesn't enforce limits" on the number of open files. The scare quotes are because there apparently is a limit, it just isn't advertised correctly: when I actually try to set the limits the way cluster_helper does, the setrlimit call fails. I'm working around this for now by changing cluster.cluster_cmd_argv to lower target_procs. It isn't really a production issue, since it's unusual to be running analysis on OSX servers :). It is fairly annoying when testing though, so I figured I'd let you know in case you want to add some sort of hack around it to the package.

cheers!
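If it helps, the failure mode can be handled defensively with something like the sketch below: try the requested limit, and back off when the kernel rejects it despite advertising an infinite hard limit. The function name is made up and this is not what cluster_helper currently does:

```python
import resource

def raise_nofile_limit(target):
    """Try to raise the soft open-file limit to target, backing off to
    the existing soft limit when the kernel rejects the request (as OSX
    does even while reporting an infinite hard limit)."""
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    for candidate in (target, soft):
        try:
            resource.setrlimit(resource.RLIMIT_NOFILE, (candidate, hard))
            return candidate
        except ValueError:
            continue  # the advertised limit lied; try something smaller
    return soft  # nothing worked; the limit is left where it was
```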