Maryom opened this issue 6 years ago:
Note that I used EFS to share the index between the nodes in the cluster; as you can see, all the files are there:
[hadoop@ip-172-31-2-103 efs]$ aws s3 cp s3://mariamup/nice/lambda_virus . --recursive
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa to ./lambda_virus.fa
download: s3://mariamup/nice/lambda_virus/lambda_virus.dict to ./lambda_virus.dict
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa.ann to ./lambda_virus.fa.ann
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa.amb to ./lambda_virus.fa.amb
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa.bwt to ./lambda_virus.fa.bwt
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa.sa to ./lambda_virus.fa.sa
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa.fai to ./lambda_virus.fa.fai
download: s3://mariamup/nice/lambda_virus/lambda_virus.fa.pac to ./lambda_virus.fa.pac
[hadoop@ip-172-31-2-103 efs]$ ls
lambda_virus.dict lambda_virus.fa.amb lambda_virus.fa.bwt lambda_virus.fa.pac
lambda_virus.fa lambda_virus.fa.ann lambda_virus.fa.fai lambda_virus.fa.sa
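For reference, a minimal sketch of how such an EFS mount is typically made available on every EMR node, e.g. via a bootstrap action. The file-system ID, region, and ownership step below are placeholders, not details from this issue:

#!/bin/bash
# Sketch of a bootstrap action that mounts a shared EFS file system on each node.
# fs-12345678 and us-east-1 are hypothetical; substitute your own values.
sudo mkdir -p /home/hadoop/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /home/hadoop/efs
sudo chown hadoop:hadoop /home/hadoop/efs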
Then, I ran:
yarn jar BigBWA-2.1.jar com.github.bigbwa.BigBWA -D mapreduce.input.fileinputformat.split.minsize=123641127 -D mapreduce.input.fileinputformat.split.maxsize=123641127 -D mapreduce.map.memory.mb=7500 -m -p --index /home/hadoop/efs/lambda_virus -r ERR000589.fqBD ExitERR00058
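As an aside, setting mapreduce.input.fileinputformat.split.minsize and split.maxsize to the same value pins the split size. A minimal sketch of one way to derive such a value from the input size and a target number of map tasks; the mapper count below is a made-up example:

# Hypothetical sizing: pick a split size so the input yields ~MAPPERS map tasks.
BYTES=$(hdfs dfs -du -s ERR000589.fqBD | awk '{print $1}')
MAPPERS=8   # made-up target; tune to your cluster
SPLIT=$((BYTES / MAPPERS))
echo "-D mapreduce.input.fileinputformat.split.minsize=$SPLIT -D mapreduce.input.fileinputformat.split.maxsize=$SPLIT"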
Then, I got this error:
Error: /mnt/yarn/usercache/hadoop/appcache/application_1517229999487_0003/container_1517229999487_0003_01_000002/tmp/libbwa9035204717035358356.so: /mnt/yarn/usercache/hadoop/appcache/application_1517229999487_0003/container_1517229999487_0003_01_000002/tmp/libbwa9035204717035358356.so: invalid ELF header (Possible cause: endianness mismatch)
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Does BigBWA require 32-bit? Because my machines are 64-bit.
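One quick way to check which architecture the bundled native library was built for is to extract it and inspect the ELF header. A sketch, where the entry name libbwa.so inside the jar is an assumption, not verified:

# Sketch: pull the native library out of the jar and inspect it.
# The entry name libbwa.so inside BigBWA-2.1.jar is assumed; adjust if it differs.
unzip -o BigBWA-2.1.jar libbwa.so -d /tmp/bigbwa-check
file /tmp/bigbwa-check/libbwa.so        # expect "ELF 64-bit LSB shared object, x86-64" on 64-bit nodes
readelf -h /tmp/bigbwa-check/libbwa.so | head -n 5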
Please check your application logs to find the error:
yarn logs -applicationId application_1517157509312_0002
where application_1517157509312_0002 must be replaced with the failed application ID.
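If the aggregated logs are long, something like the following can narrow them down; the grep patterns are just a suggestion:

# Sketch: fetch the aggregated logs and keep only error-looking lines with context.
yarn logs -applicationId application_1517157509312_0002 > app.log
grep -n -B2 -A5 -iE 'error|exception' app.log | less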
I checked; here are some of the errors:
[fclose] No such file or directory
[Java_com_github_sparkbwa_BwaJni_bwa_1jni] Error saving stdout.
19/03/10 16:54:47 INFO DAGScheduler: failed: Set()
Hi, I have some problems.
The first one is:
19/03/28 13:53:47 INFO mapreduce.Job: map 100% reduce 0%
19/03/28 13:53:51 INFO mapreduce.Job: map 100% reduce 100%
19/03/28 13:53:51 INFO mapreduce.Job: Job job_1552831617942_0008 completed successfully
It said that it had completed successfully, but the counters show:
Map input records=131250
Map output records=1
Map output bytes=30
Map output materialized bytes=38
Input split bytes=108
Combine input records=0
Combine output records=0
Reduce input groups=1
Reduce shuffle bytes=38
Reduce input records=1
Reduce output records=0
It didn't have any output. I had used two smaller FASTQ files. Then I went through the files on HDFS:
hdfs dfs -ls /user/root/ExitERR000589/*
-rw-r--r-- 1 root supergroup 4919873 2019-03-28 13:53 /user/root/ExitERR000589/Input0_1.fq
-rw-r--r-- 1 root supergroup 4921106 2019-03-28 13:53 /user/root/ExitERR000589/Input0_2.fq
-rw-r--r-- 1 root supergroup 0 2019-03-28 13:53 /user/root/ExitERR000589/_SUCCESS
-rw-r--r-- 1 root supergroup 0 2019-03-28 13:53 /user/root/ExitERR000589/part-r-00000
So it didn't have any output. But I don't know where the Input0_1.fq and Input0_2.fq files came from; I didn't create them.
The second one is about "--index". I don't know what this argument does. I changed this argument's value and got the same result, and it throws a lot of java.lang.ArrayIndexOutOfBoundsException exceptions. But when I ran it on another computer, they didn't appear.
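For what it's worth, --index is normally the prefix of a BWA index built beforehand with bwa index, matching the .amb/.ann/.bwt/.pac/.sa files listed earlier in this thread. A minimal sketch, reusing the paths from the first post as an example:

# Sketch: build the BWA index whose prefix is then passed to --index.
bwa index -p /home/hadoop/efs/lambda_virus lambda_virus.fa
ls /home/hadoop/efs/lambda_virus.*   # should list .amb .ann .bwt .pac .sa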
Hi,
Thanks for this repo.
When I ran:
I got this error:
I used a Hadoop cluster on Amazon EMR.