jianyangqt / gcta

GCTA software
GNU General Public License v3.0
87 stars 26 forks

Segmentation fault (core dumped) #105

Open Elijah-Ugoh opened 1 month ago

Elijah-Ugoh commented 1 month ago

Hello,

I keep getting a "Segmentation fault (core dumped)" error every time I run --make-grm: ./gcta64 --bfile pruned_data --make-grm --out GRM_matrix --autosome --maf 0.05

I have 1890 samples and about 174,000 markers in my files. I have tried with up to 30 threads, but I still get the error.

Would appreciate any help to figure out the issue. Thanks.
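For scale, the GRM output itself is small at this sample size, so the crash is unlikely to be about output size; memory use during computation or a build problem is more plausible. A back-of-the-envelope sketch (assuming the documented GCTA binary GRM format, which stores the lower triangle of the matrix, diagonal included, as single-precision floats):

```python
# Rough size estimate for GCTA's GRM binary output (*.grm.bin).
# GCTA stores the lower triangle (diagonal included) of the n x n
# relatedness matrix as 4-byte single-precision floats.

def grm_bin_bytes(n_samples: int) -> int:
    """Bytes needed for n*(n+1)/2 float32 entries."""
    n_pairs = n_samples * (n_samples + 1) // 2
    return n_pairs * 4  # 4 bytes per float32

n = 1890  # sample count from the original post
print(f"{grm_bin_bytes(n) / 1e6:.1f} MB")  # ~7.1 MB
```

So a 1890-sample GRM is only a few megabytes on disk; if the process dies during --make-grm, checking available RAM and trying a different GCTA build are better first steps than reducing threads.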

ibrahimuddin commented 1 month ago

I'm having this exact issue as well with a single sample

Elijah-Ugoh commented 1 month ago

> I'm having this exact issue as well with a single sample

Hello Ibrahim, did you get any solution yet?

ibrahimuddin commented 1 month ago

Unfortunately, no :(. It seems others are having the same issue; it may well be a bug in the GCTA source code.

Elijah-Ugoh commented 1 month ago

Okay, thanks for the feedback.

evanmaclean commented 3 weeks ago

I am also encountering this issue on CentOS Linux 7 (Core), x86-64

longmanz commented 3 weeks ago

Hi everyone, have you tried the other versions of GCTA at https://yanglab.westlake.edu.cn/software/gcta/#Download? Did you all encounter this issue while computing the GRM, as @Elijah-Ugoh did, or during other procedures?

evanmaclean commented 3 weeks ago

> Hi everyone, have you tried the other versions of GCTA at https://yanglab.westlake.edu.cn/software/gcta/#Download? Did you all encounter this issue while computing the GRM, as @Elijah-Ugoh did, or during other procedures?

For me this occurs while the GRM is being calculated within --mlma-loco

I do seem to be able to make a GRM using the --make-grm command though. I am running the latest version noted at the URL you mention:

version v1.94.1 Linux

longmanz commented 1 week ago

Hi @evanmaclean, calculating the GRM on the fly within --mlma-loco costs a large amount of memory. Your workaround seems good to me, and hopefully it avoids the core dump error.
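If the segfault only appears when the GRM is built on the fly, one possible workaround (a sketch, not a verified fix; the file prefixes pruned_data, GRM, and assoc_out are placeholders matching the thread's examples) is to precompute the GRM with --make-grm, which reportedly works, and reuse it for the association step. Note that --mlma-loco builds its leave-one-chromosome-out GRMs internally, so the standard --mlma, which is documented to accept a precomputed GRM via --grm, may be the safer target:

```shell
# Step 1: build the GRM once with --make-grm (this step reportedly works).
./gcta64 --bfile pruned_data --make-grm --autosome --maf 0.05 \
         --thread-num 4 --out GRM

# Step 2: run the mixed-model association reusing the precomputed GRM.
# --mlma accepts a precomputed GRM via --grm per the GCTA documentation.
./gcta64 --bfile pruned_data --grm GRM --mlma \
         --thread-num 4 --out assoc_out
```

If LOCO correction is essential, it may also be worth testing whether an earlier GCTA release from the download page avoids the crash.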

Elijah-Ugoh commented 1 week ago

Okay, I'm still getting the "core dumped" error, so I'll just paste my job script here. Perhaps I'm doing something the wrong way:

#!/bin/bash
#SBATCH -J GCTA_GRM
#SBATCH -o GRM_%j.out
#SBATCH -e GRM_%j.err
#SBATCH --time=03:00:00
#SBATCH --cpus-per-task=2
#SBATCH --mem=20G

# This script runs the GCTA software to calculate the genetic relatedness
# matrix among 1864 individuals using about 17 million SNPs obtained from
# the PLINK binary input files.

# write the script itself to standard output
cat $0

# load the gcta module
module load GCC/12.3.0
module load GCTA/1.94.1

# copy input data to node-local disk
cp -p pruned_data.bim pruned_data.bed pruned_data.fam $NAISS_TMP

# change to the execution directory
cd $NAISS_TMP

# run gcta to compute the genetic relatedness matrix,
# partitioning the GRM into 5 parts for resource efficiency
gcta64 --bfile pruned_data --make-grm-part 5 1 --out GRM --autosome --maf 0.05 --thread-num 1
gcta64 --bfile pruned_data --make-grm-part 5 2 --out GRM --autosome --maf 0.05 --thread-num 1
gcta64 --bfile pruned_data --make-grm-part 5 3 --out GRM --autosome --maf 0.05 --thread-num 1
gcta64 --bfile pruned_data --make-grm-part 5 4 --out GRM --autosome --maf 0.05 --thread-num 1
gcta64 --bfile pruned_data --make-grm-part 5 5 --out GRM --autosome --maf 0.05 --thread-num 1

# merge all parts together
cat GRM.part_5_*.grm.id > GRM.grm.id
cat GRM.part_5_*.grm.bin > GRM.grm.bin
cat GRM.part_5_*.grm.N.bin > GRM.grm.N.bin

# apply the relatedness threshold cutoff
gcta64 --grm GRM --grm-cutoff 0.125 --make-grm --thread-num 1 --out GRM_0125

# rescue results to the submission directory
cp -p GRM* $SLURM_SUBMIT_DIR
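One pitfall with a part-wise merge like the one above is that a truncated or missing part file can go unnoticed. A small consistency check (a sketch; the check_grm helper is illustrative, and the GRM prefix follows the script's file names) verifies that the merged GRM.grm.bin holds exactly n(n+1)/2 float32 values for the n individuals listed in GRM.grm.id:

```python
# Sanity-check a merged GCTA GRM: GRM.grm.bin should contain
# n*(n+1)/2 single-precision floats (4 bytes each), where n is the
# number of individuals (lines) in GRM.grm.id.
import os
import sys

def check_grm(prefix: str) -> int:
    """Return n if the GRM files are size-consistent, else exit."""
    with open(prefix + ".grm.id") as f:
        n = sum(1 for line in f if line.strip())
    expected = n * (n + 1) // 2 * 4  # bytes: one float32 per pair
    actual = os.path.getsize(prefix + ".grm.bin")
    if actual != expected:
        sys.exit(f"{prefix}.grm.bin is {actual} bytes, expected {expected}")
    return n

# Guarded so the script is a no-op when the files are absent.
if __name__ == "__main__" and os.path.exists("GRM.grm.id"):
    n = check_grm("GRM")
    print(f"GRM looks consistent for {n} individuals")
```

Running this right after the `cat` merge, before the --grm-cutoff step, would distinguish a merge problem from a genuine GCTA crash.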