katiaplopes closed this issue 3 years ago
I've just tried it and got this error. It can't be a memory issue because we are using 1TB of RAM.
```
caught segfault: address 0xc0, cause 'memory not mapped'

Traceback:
 1: extreme_deconvolution_rcpp(ydata, tycovar, projection, logweights, xamp, xmean, unlist(lapply(xcovar, t)), fixamp, fixmean, fixcovar, tol, maxiter, likeonly, w, clog, splitnmerge, clog2, noprojection, diagerrors, noweight)
 2: extreme_deconvolution(data$Bhat[subset, ], data$Shat[subset, ]^2, xamp = pi_init, xmean = matrix(0, nrow = K, ncol = R), xcovar = Ulist_init, fixmean = TRUE, ...)
 3: ed_wrapper(data, Ulist_init, subset, ...)
 4: cov_ed(m_data, U.pca, subset = strong.subset)

Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace
```
@katiaplopes I suggest you try using even fewer effects to learn the sharing pattern from the data -- it does not need 4M eQTLs to learn it well. At the very least, a much smaller strong set will rule out the memory problem, and then we can see whether the GSL complaint still persists.
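Following that suggestion, here is a minimal sketch of capping the strong set before running `cov_ed`. The helper `subsample_strong()` and the cap of 20,000 are hypothetical choices for illustration, not part of mashr; the `strong.subset` indices would typically come from `get_significant_results()` as in the mashr eQTL vignette.

```r
# Hypothetical helper: cap the number of strong effects used to learn
# data-driven covariances. ED does not need millions of rows to fit well.
subsample_strong <- function(strong, n_max = 20000, seed = 1) {
  stopifnot(is.numeric(strong), n_max > 0)
  if (length(strong) <= n_max) return(strong)
  set.seed(seed)                      # reproducible subsample
  sort(sample(strong, n_max))         # keep indices in order
}

# Usage sketch (names follow the mashr eQTL outline vignette):
# strong.subset <- get_significant_results(m.1by1, 0.05)
# strong.subset <- subsample_strong(strong.subset, 20000)
# data.strong   <- mash_set_data(data$Bhat[strong.subset, ],
#                                data$Shat[strong.subset, ])
# U.ed <- cov_ed(data.strong, U.pca)
```

A random subsample keeps the covariance patterns representative while shrinking the memory footprint by orders of magnitude.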
Thanks a lot, @gaow. I'll try it!
I've started applying mashr for eQTL analysis following this website (https://stephenslab.github.io/mashr/articles/eQTL_outline.html), but I get the following error when I run the code for data-driven covariances:
```r
U.ed = cov_ed(data.strong, U.pca)
```

```
gsl: lu.c:266: ERROR: matrix is singular
Default GSL error handler invoked.
Aborted
```
We are using all eQTLs found with the fastQTL nominal pass (~70M). I've already checked the input matrices (Bhat and Shat): they look like the example and don't contain NA or Inf values. Even with a very stringent threshold when selecting the strong subset, which reduced it to ~4M eQTLs, we still get the same error. I'm wondering if we are doing this correctly and would appreciate any advice.
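For reference, a quick sanity check on the inputs, assuming `Bhat` and `Shat` are plain numeric matrices as in the vignette. The helper name is hypothetical; besides NA/Inf, zero (or negative) standard errors are worth checking, since degenerate errors can also produce singular matrices downstream:

```r
# Hypothetical sanity check for mash inputs:
# Bhat = effect estimates, Shat = standard errors (same dimensions).
check_mash_input <- function(Bhat, Shat) {
  stopifnot(is.matrix(Bhat), is.matrix(Shat))
  stopifnot(all(dim(Bhat) == dim(Shat)))     # must be conformable
  stopifnot(all(is.finite(Bhat)))            # no NA/NaN/Inf effects
  stopifnot(all(is.finite(Shat)))            # no NA/NaN/Inf errors
  stopifnot(all(Shat > 0))                   # zero SEs can yield singular matrices
  invisible(TRUE)
}
```

Running `check_mash_input(data$Bhat, data$Shat)` before `cov_ed` would catch these cases early instead of failing inside GSL.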
Thanks!