Closed — martinezvbs closed this issue 3 months ago
I think 40K is still a large number of rows, especially for `brew`. I'm not sure about `runEdgeR` (as that is based on the `edgeR` package), so if that command is also crashing, then 40K might just be too large of a dataset. The parallelization occurs on a per-sample basis, so if both commands are crashing, that suggests to me you may want to break your 40K dataset into smaller datasets (around 3000 rows or fewer is what we've tested, but you could try something like 10K). As you mentioned, you could reduce the rows by setting a counts threshold, using an outlier detection method, or simply partitioning the dataset. Hope this helps!
Hi,
I am trying to use BEER on a matrix (see below):
In R, I was running the following for the differential analysis:
However, every time I try to run the above, R stops working (I also tried MultiCore for parallel execution). I migrated to an R server (30 cores per task and a lot of memory) and was using the following code, but it still crashes.
In this case, what would be good to change? I was thinking of reducing the number of rows; however, I would like to try something else before trimming the counts.
My system: R 4.3.2 / BEER 1.6.0 / PhIPData 1.10.0
Thanks!