macs3-project / MACS

MACS -- Model-based Analysis of ChIP-Seq
https://macs3-project.github.io/MACS/
BSD 3-Clause "New" or "Revised" License

OSError: [Errno 122] Disk quota exceeded #424

Open hemantgujar opened 3 years ago

hemantgujar commented 3 years ago

Hi,

I am receiving the following error message. I have 5 TB of space in my tempdir. Can anyone help me figure out what the issue is? Thanks. The same command works when I do not use --broad.

$ macs2 --version
macs2 2.2.7.1

INFO @ Tue, 01 Dec 2020 23:42:29: Command line: callpeak --broad --tempdir /scratch/xxx/ -t /project/xxx_71/xxx/xx1/A1.clean_sorted.bam -c /project/xxx_71/xxx/xx1/F1.clean_sorted.bam -f BAMPE -g hs --outdir /project/xxx_71/xxx/xx1/broad -n A1.F1 -B -q 0.01
ARGUMENTS LIST:
name = A1.F1
format = BAMPE
ChIP-seq file = ['/project/xxx_71/xxx/xx1/A1.clean_sorted.bam']
control file = ['/project/xxx_71/xxx/xx1/F1.clean_sorted.bam']
effective genome size = 2.70e+09
band width = 300
model fold = [5, 50]
qvalue cutoff for narrow/strong regions = 1.00e-02
qvalue cutoff for broad/weak regions = 1.00e-01
The maximum gap between significant sites is assigned as the read length/tag size.
The minimum length of peaks is assigned as the predicted fragment length "d".
Larger dataset will be scaled towards smaller dataset.
Range for calculating regional lambda is: 1000 bps and 10000 bps
Broad region calling is on
Paired-End mode is on

INFO @ Tue, 01 Dec 2020 23:42:29: #1 read fragment files...
INFO @ Tue, 01 Dec 2020 23:42:29: #1 read treatment fragments...
INFO @ Tue, 01 Dec 2020 23:42:34: 1000000
INFO @ Tue, 01 Dec 2020 23:42:39: 2000000

....

INFO @ Tue, 01 Dec 2020 23:49:26: 93000000
INFO @ Tue, 01 Dec 2020 23:49:31: 94000000
INFO @ Tue, 01 Dec 2020 23:49:36: 95000000
INFO @ Tue, 01 Dec 2020 23:49:36: 95032218 fragments have been read.
INFO @ Tue, 01 Dec 2020 23:50:56: #1.2 read input fragments...
INFO @ Tue, 01 Dec 2020 23:51:01: 1000000
INFO @ Tue, 01 Dec 2020 23:51:06: 2000000

....

INFO @ Wed, 02 Dec 2020 00:00:40: 124000000
INFO @ Wed, 02 Dec 2020 00:00:44: 125000000
INFO @ Wed, 02 Dec 2020 00:00:49: 126000000
INFO @ Wed, 02 Dec 2020 00:00:54: 127000000
INFO @ Wed, 02 Dec 2020 00:00:54: 127040449 fragments have been read.
INFO @ Wed, 02 Dec 2020 00:02:31: #1 mean fragment size is determined as 285.8 bp from treatment
INFO @ Wed, 02 Dec 2020 00:02:31: #1 note: mean fragment size in control is 247.8 bp -- value ignored
INFO @ Wed, 02 Dec 2020 00:02:31: #1 fragment size = 285.8
INFO @ Wed, 02 Dec 2020 00:02:31: #1 total fragments in treatment: 95032218
INFO @ Wed, 02 Dec 2020 00:02:31: #1 user defined the maximum fragments...
INFO @ Wed, 02 Dec 2020 00:02:31: #1 filter out redundant fragments by allowing at most 1 identical fragment(s)
INFO @ Wed, 02 Dec 2020 00:05:58: #1 fragments after filtering in treatment: 67559775
INFO @ Wed, 02 Dec 2020 00:05:58: #1 Redundant rate of treatment: 0.29
INFO @ Wed, 02 Dec 2020 00:05:58: #1 total fragments in control: 127040449
INFO @ Wed, 02 Dec 2020 00:05:58: #1 user defined the maximum fragments...
INFO @ Wed, 02 Dec 2020 00:05:58: #1 filter out redundant fragments by allowing at most 1 identical fragment(s)
INFO @ Wed, 02 Dec 2020 00:10:46: #1 fragments after filtering in control: 115046583
INFO @ Wed, 02 Dec 2020 00:10:46: #1 Redundant rate of control: 0.09
INFO @ Wed, 02 Dec 2020 00:10:46: #1 finished!
INFO @ Wed, 02 Dec 2020 00:10:46: #2 Build Peak Model...
INFO @ Wed, 02 Dec 2020 00:10:46: #2 Skipped...
INFO @ Wed, 02 Dec 2020 00:10:46: #3 Call peaks...
INFO @ Wed, 02 Dec 2020 00:10:46: #3 Call broad peaks with given level1 -log10qvalue cutoff and level2: 2.000000, 1.000000...
INFO @ Wed, 02 Dec 2020 00:10:46: #3 Pre-compute pvalue-qvalue table...
INFO @ Wed, 02 Dec 2020 00:26:01: #3 In the peak calling step, the following will be performed simultaneously:
INFO @ Wed, 02 Dec 2020 00:26:01: #3 Write bedGraph files for treatment pileup (after scaling if necessary)... A1.F1_treat_pileup.bdg
INFO @ Wed, 02 Dec 2020 00:26:01: #3 Write bedGraph files for control lambda (after scaling if necessary)... A1.F1_control_lambda.bdg
INFO @ Wed, 02 Dec 2020 00:26:01: #3 Call peaks for each chromosome...
INFO @ Wed, 02 Dec 2020 00:42:19: #4 Write output xls file... /project/xxx_71/xxx/xx1/broad/A1.F1_peaks.xls

Traceback (most recent call last):
  File "/project/xxx_71/xxx/xx1/software/MACS-master/bin/macs2", line 4, in <module>
    __import__('pkg_resources').run_script('MACS2==2.2.7.1', 'macs2')
  File "/project/xxx_71/xxx/xx1/software/python_package/pkg_resources/__init__.py", line 650, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/project/xxx_71/xxx/xx1/software/python_package/pkg_resources/__init__.py", line 1446, in run_script
    exec(code, namespace, namespace)
  File "/project/xxx_71/xxx/xx1/software/MACS-master/lib/python3.7/site-packages/MACS2-2.2.7.1-py3.7-linux-x86_64.egg/EGG-INFO/scripts/macs2", line 653, in <module>
    main()
  File "/project/xxx_71/xxx/xx1/software/MACS-master/lib/python3.7/site-packages/MACS2-2.2.7.1-py3.7-linux-x86_64.egg/EGG-INFO/scripts/macs2", line 51, in main
    run( args )
  File "/project/xxx_71/xxx/xx1/software/MACS-master/lib/python3.7/site-packages/MACS2-2.2.7.1-py3.7-linux-x86_64.egg/MACS2/callpeak_cmd.py", line 297, in run
    peakdetect.peaks.write_to_xls(ofhd_xls, name = options.name.encode())
  File "MACS2/IO/PeakIO.pyx", line 1175, in MACS2.IO.PeakIO.BroadPeakIO.write_to_xls
OSError: [Errno 122] Disk quota exceeded
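Note that the traceback fails inside BroadPeakIO.write_to_xls, i.e. while writing the output .xls file under --outdir, not while reading input or using the tempdir. Errno 122 is the Linux EDQUOT code, which is distinct from a full disk. A quick way to confirm what that number means on your system (a minimal sketch, not part of MACS):

```python
import errno
import os

# On Linux, errno 122 is EDQUOT, raised when a write would exceed a
# filesystem quota -- distinct from ENOSPC (28), "No space left on device".
print(errno.errorcode[122])  # EDQUOT (on Linux)
print(os.strerror(122))      # "Disk quota exceeded" (on Linux)
```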

taoliu commented 3 years ago

It seems that you have a 'quota' on your account. Although the hard disk still has free space, the system admin may have set a limit on how much disk space a single user can use. If this is a Linux machine, run the quota command to check.
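Since quota limits are only enforced at write time, df/statvfs-style free-space checks can look fine while writes still fail. A small probe that actually writes a file is a more reliable check (a minimal sketch with a hypothetical can_write helper; this is not MACS code):

```python
import errno
import os
import tempfile

def can_write(dirpath, nbytes=1024 * 1024):
    """Try to write nbytes into dirpath and report whether it succeeds.

    Quotas bite on write, not in free-space listings, so an actual
    write is the reliable probe. (Hypothetical helper, not MACS code.)
    """
    try:
        fd, path = tempfile.mkstemp(dir=dirpath)
        try:
            os.write(fd, b"\0" * nbytes)
        finally:
            os.close(fd)
            os.remove(path)
        return True
    except OSError as exc:
        if exc.errno in (errno.EDQUOT, errno.ENOSPC):
            return False
        raise

# Probe the directories MACS2 writes to (paths are examples from this thread):
# can_write("/scratch/xxx"), can_write("/project/xxx_71/xxx/xx1/broad")
```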

hemantgujar commented 3 years ago

You mean something like this? Typing quota does not give any output. I am using scratch2 and the project directory, which have plenty of space.

$ quota
$ myquota

/home1/xxx
user/group            size                        chunk files
name       id         used         hard           used       hard
xxx        316413     729.95 MiB   100.00 GiB     5593       2000000

/scratch/xxx
user/group            size                        chunk files
name       id         used         hard           used       hard
xxx        316413     0 Byte       10.00 TiB      0          unlimited

/scratch2/xxx
user/group            size                        chunk files
name       id         used         hard           used       hard
xxx        316413     0 Byte       30.00 TiB      0          unlimited

/project/xxx_71
user/group            size                        chunk files
name       id         used         hard           used       hard
xxx_71     32561      3.25 TiB     5.00 TiB       311373     30000000

/project/xx_56
user/group            size                        chunk files
name       id         used         hard           used       hard
xx_56      32575      2.23 TiB     10.00 TiB      9417       60000000
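One thing worth noting when reading this listing: the quota that matters for the failing write is the one on the filesystem holding --outdir (/project/xxx_71), where the large -B bedGraph files and the .xls are written, not the tempdir on /scratch. A rough headroom calculation from the used/hard size pairs above (values copied from the listing; a sketch for orientation, not a diagnosis):

```python
# Size pairs (used, hard) copied from the myquota listing above.
GIB = 1024 ** 3
TIB = 1024 ** 4

quotas = {
    "/home1/xxx":      (729.95 * 1024 ** 2, 100.00 * GIB),
    "/project/xxx_71": (3.25 * TIB, 5.00 * TIB),
    "/project/xx_56":  (2.23 * TIB, 10.00 * TIB),
}

for path, (used, hard) in quotas.items():
    headroom = (hard - used) / TIB
    print(f"{path}: {headroom:.2f} TiB of quota headroom left")
```

Since /project/xxx_71 is a group quota with about 1.75 TiB of headroom, it may be worth checking whether other group members' writes, rather than your own, are pushing it over the limit while this job runs.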