Closed by ethanwhite 10 years ago
Thanks Ethan! Can you please post the output of the file located here:
./mete-spatial/scripts/log_files/error_sorensen_abu_ucsc.log
Looks like a hidden dependency.
[1] "Empirical DDR analysis, ..."
Loading required package: permute
Loading required package: lattice
This is vegan 2.0-10
Loading required package: snow
Error in library(rlecuyer) : there is no package called ‘rlecuyer’
Execution halted
I just installed rlecuyer and started again.
Thanks! I'll add that to the list of dependencies.
Next one:
[1] "Aggregating and exporting the empirical DDR results, ..."
Error in readChar(con, 5L, useBytes = TRUE) : cannot open the connection
Calls: getResults -> load -> readChar
In addition: Warning message:
In readChar(con, 5L, useBytes = TRUE) :
cannot open compressed file './sorensen/sorensen_ucsc_bisect_abu_indivRPM.Rdata', probable reason 'No such file or directory'
Execution halted
[1] "Aggregating and exporting the simulated DDR results, ..."
[1] "Aggregating and exporting the simulated DDR results, complete!"
[1] "Comparing empirical and METE DDR, ..."
Error in readChar(con, 5L, useBytes = TRUE) : cannot open the connection
Calls: load -> readChar
In addition: Warning message:
In readChar(con, 5L, useBytes = TRUE) :
cannot open compressed file './sorensen/empirSorBin_bisect.Rdata', probable reason 'No such file or directory'
Execution halted
[1] "Generating figures, ..."
Error in readChar(con, 5L, useBytes = TRUE) : cannot open the connection
Calls: load -> readChar
In addition: Warning message:
In readChar(con, 5L, useBytes = TRUE) :
cannot open compressed file './sorensen/empirSorAbu_bisect.Rdata', probable reason 'No such file or directory'
Execution halted
[1] "Figures are stored in the directory ./figs/"
[1] "Analysis Complete"
user system elapsed
402.678 69.923 1046.413
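The three load failures above share one cause: upstream steps never wrote their .Rdata outputs. A quick way to see how far the pipeline actually got is to check for the expected files before the aggregation steps run. This is an illustrative sketch, not code from the repo; the file names are copied from the error messages above.

```python
from pathlib import Path

# Diagnostic sketch: the "cannot open compressed file" errors above all
# point at .Rdata outputs of earlier pipeline steps. Listing which
# expected files are missing shows which upstream step failed to write
# its results. File names are taken from the log messages.
expected = [
    "./sorensen/sorensen_ucsc_bisect_abu_indivRPM.Rdata",
    "./sorensen/empirSorBin_bisect.Rdata",
    "./sorensen/empirSorAbu_bisect.Rdata",
]
missing = [f for f in expected if not Path(f).exists()]
print("missing outputs:", missing)
```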
The same log file as before contains:
[1] "Empirical DDR analysis, ..."
Loading required package: permute
Loading required package: lattice
This is vegan 2.0-10
Loading required package: snow
Created directory: /home/ethan/.sfCluster/restore
R Version: R version 3.0.1 (2013-05-16)
snowfall 1.84-6 initialized (using snow 0.3-13): parallel execution on 2 CPUs.
Warning message:
In searchCommandline(parallel, cpus = cpus, type = type, socketHosts = socketHosts, :
Unknown option on commandline: --file
Source ./scripts/spat_functions.R loaded.
Source ./scripts/spat_functions.R loaded in cluster.
> SpatPerm2D <- function(psp, shiftpos = NULL, rotate = NULL,
+ meth = "shift", sp = FALSE) {
+ n <- dim(psp)[2]
+ if (length(dim(psp)) = .... [TRUNCATED]
...
> download_data = function(urls, delim, output_path) {
+ require(RCurl)
+ for (i in seq_along(urls)) {
+ temp = getURL(urls[i])
+ .... [TRUNCATED]
> capwords = function(s, strict = FALSE) {
+ cap = function(s) paste(toupper(substring(s, 1, 1)), {
+ s = substring(s, 2)
+ if (st .... [TRUNCATED]
Killed
I think this is an example of the strange crashing I was telling you about. Alternatively, it's possible your machine doesn't have enough RAM to compute the large distance matrices across two cores, but this seems unlikely.
I'm going to change the ddr_run_all.R code so that it runs in serial only. This way we can avoid any strange complications with running the code in parallel.
I have 8 GB plus 4 more of swap, but I was definitely using most of it. Let me know when the serial version is up and I'll give it another run.
I ran into the memory problem on my local machine too, where R reported that the vector was too large to handle.
Thanks for the report Xiao!
Xiao how much RAM do you have on your local machine?
I pushed the changes needed to run in serial, commit 7fae999b8c09a86beb312549789ed092654c3764. I'm running the code on my end right now to make sure something new didn't pop up. I'll let you know when it's time for you to give it another shot. Thanks for all your help!!
It's 2GB.
Ok, everything looks good on my end. Feel free to pull an update from the repo and let 'er rip.
That's definitely working better. Now I'm running into problems starting at the empirical-theoretical DDR comparison:
...
[1] "Submitting jobs to compute the METE simulated DDRs, complete!"
[1] "Aggregating and exporting the empirical DDR results, ..."
[1] "Aggregating and exporting the empirical DDR results, complete!"
[1] "Aggregating and exporting the simulated DDR results, ..."
[1] "Aggregating and exporting the simulated DDR results, complete!"
[1] "Comparing empirical and METE DDR, ..."
null device
1
null device
1
[1] "Comparing empirical and METE DDR, complete!"
[1] "Generating figures, ..."
Error in get_ddr_resid(empirSorAbu, simSorAbuFixed) :
object 'out' not found
Execution halted
[1] "Figures are stored in the directory ./figs/"
[1] "Analysis Complete"
user system elapsed
5516.446 717.001 6813.792
This is a strange error. 'out' is an object created internally within the function get_ddr_resid(). Ethan, can you please email me the following two files:
./mete-spatial/sorensen/empirSorAbu_bisect.Rdata
./mete-spatial/simulated_empirical_results_bisect.Rdata
OK, they're on their way.
Thanks Ethan. The problem appears to be that the file
./mete-spatial/simulated_empirical_results_bisect.Rdata
was not populated with any results beyond the empty lists that should contain the results of the simulated DDR analyses. Can you take a look in the following log files and post any errors:
./mete-spatial/scripts/log_files/error_sim_analysis_ucsc_empirSAD.log
./mete-spatial/scripts/log_files/error_sim_analysis_ucsc_meteSAD.log
./mete-spatial/scripts/log_files/error_sim_analysis_oosting_empirSAD.log
./mete-spatial/scripts/log_files/error_sim_analysis_oosting_meteSAD.log
Thanks!
Same basic error in all of those files:
[1] "METE simulated DDR analysis, ..."
Loading required package: permute
Loading required package: lattice
This is vegan 2.0-10
Loading required package: methods
Loading required package: bigmemory.sri
Loading required package: BH
bigmemory >= 4.0 is a major revision since 3.1.2; please see packages
biganalytics and and bigtabulate and http://www.bigmemory.org for more information.
Error in read.big.matrix(file.path("./comms", fileName), header = TRUE, :
The file ./comms/simulated_comms_oosting_C10_B12_grid.txt could not be found
Calls: read.big.matrix -> read.big.matrix
Execution halted
and there is nothing in the `./comms` folder.
Ok then can you please take a look at
./mete-spatial/scripts/log_files/error_mete_comm_gen_ucsc_empirSAD.log
./mete-spatial/scripts/log_files/error_mete_comm_gen_ucsc_meteSAD.log
./mete-spatial/scripts/log_files/error_mete_comm_gen_oosting_empirSAD.log
./mete-spatial/scripts/log_files/error_mete_comm_gen_oosting_meteSAD.log
I think we've almost tracked the problem down! Thanks again for your hard work on this.
They all look like:
Generating simulated community, ...
Traceback (most recent call last):
File "./scripts/spat_community_generation.py", line 124, in <module>
output_results(comms, S, N, ncomm, bisec, transect, abu, shrt_name)
File "./scripts/spat_community_generation.py", line 68, in output_results
out[i,j,] = [i + 1] + comms[i][j][0:2] + comms[i][j][2]
IndexError: list index out of range
Are you sure you have the most recent version of mete installed on your machine?
I just checked and the local code was up to date (for both METE and macroecotools). It's possible that I'd temporarily installed an older version of something at some point so I've just rerun setup.py for both modules and will rerun the code again tonight and hope that takes care of it.
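A quick way to catch this kind of stale-install problem is to check which copy of a module Python is actually importing; if the reported path points somewhere other than the fresh install, an old copy is shadowing it. Illustrative sketch only, shown with a stdlib module as a stand-in:

```python
import importlib

# Illustrative: print where an imported module actually lives, to confirm
# the interpreter picks up the freshly installed version rather than a
# stale copy. "os" is a stand-in; in practice you would import "mete" or
# "macroecotools" here.
mod = importlib.import_module("os")
print(mod.__file__)
```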
Thanks Ethan! I really appreciate your help here.
Well, sorry, it looks like I must have temporarily installed an older version of mete a little while ago when the NESCent folks were doing some reproducibility work and had some questions about White et al. 2012 (which relies on a tag in mete). Apologies.
I now get beautiful figures and everything! Just one more error on an undocumented R package dependency, `hash`:
[1] "Comparing empirical and METE DDR, complete!"
[1] "Generating figures, ..."
null device
1
null device
1
null device
1
null device
1
null device
1
[1] "Note: Points will be dropped where the metric equals zero becuase the y-axis is log transformed"
[1] "Note: Points will be dropped where the metric equals zero becuase the y-axis is log transformed"
[1] "Note: Points will be dropped where the metric equals zero becuase the y-axis is log transformed"
[1] "Note: Points will be dropped where the metric equals zero becuase the y-axis is log transformed"
null device
1
null device
1
null device
1
null device
1
null device
1
Error in library(hash) : there is no package called ‘hash’
Calls: source -> withVisible -> eval -> eval -> library
Execution halted
[1] "Figures are stored in the directory ./figs/"
[1] "Analysis Complete"
user system elapsed
10971.958 1223.257 12991.410
Looks like this is good to go for me!
`hash` issue fixed in cd634c63c0b6bdcb886d031f5b5b4f97375e10a0.
I have confirmed that with `hash` installed the figures script now runs all the way through and produces the last couple of missing figures.
The one last thing I noticed is that I guess we've got some zero values for things that are going on log axes? I couldn't immediately figure out what those would be, so I thought I'd flag it here just to be safe. As long as those aren't an issue I think this is ready to go.
Nice job getting this all cleaned up and apologies for taking us on a side trip with something that was working fine.
That's great news that everything came through. The log warnings are for a set of plots that include the lower 25% quantile of the Sorensen index, which for fine grains is zero and thus triggers the warning when plotting on log-log axes. The plots are still created; you just lose that lower 25% quantile. So these are warnings that can be ignored. Thanks for all your hard work testing the code out. Also, I'm impressed that your machine only took about 4 hours to run the code. On wash this took about 11 hours.
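The dropped-points behavior described above can be shown with a toy example (made-up numbers, not project data): when at least a quarter of the pairwise Sorensen values are exactly zero, the lower 25% quantile is zero and cannot be placed on a log-scaled axis.

```python
import numpy as np

# Toy data: at fine grains many pairwise Sorensen values are exactly 0.
sor = np.array([0.0, 0.0, 0.0, 0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0])

q25 = np.quantile(sor, 0.25)
print(q25)  # 0.0 -> this quantile curve is dropped on a log-log plot
```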
After fixing the first round of issues this morning I've run into some more. Here's the full output including the error messages: