NCI-CGR / IlluminaSequencingAnalysis

All Illumina sequencing-related projects from Xin will be recorded in this repo

Discussion: BWA performance evaluation (different number of computational resources) #53

Open lxwgcool opened 2 years ago

lxwgcool commented 2 years ago

To check the relationship between the number of cores and BWA's performance (NVIDIA's published comparison does not make sense to me), I launched five jobs for comparison, using 16, 24, 32, 48, and 52 cores respectively.

Here is the output dir: /scratch/lix33/Data/BWA/0.7.17/Performance_Test

Here is the job script dir: /scratch/lix33/Data/BWA/Script/Performance_Comparison
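
The actual submission scripts live in that directory and are not reproduced here. As a rough sketch, a loop along the following lines, using the `by_node` parallel environment and `bigmem.q` queue visible in the job listing below, would generate the five jobs; the reference and FASTQ paths are placeholders, not the real inputs:

```bash
#!/bin/bash
# Hypothetical reconstruction of the five submissions.
# REF/R1/R2 are placeholder paths, not the actual test data.
REF=/path/to/reference.fa
R1=/path/to/sample_R1.fastq.gz
R2=/path/to/sample_R2.fastq.gz

for CORES in 16 24 32 48 52; do
    # qsub reads the job script from stdin when no script file is given
    echo "bwa mem -t ${CORES} ${REF} ${R1} ${R2} > BWA_CPU_${CORES}.sam" |
        qsub -N "BWA_CPU_${CORES}" -q bigmem.q -pe by_node "${CORES}" -cwd -j y
done
```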

The job details are below:

job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID 
-----------------------------------------------------------------------------------------------------------------
4282558 0.60005 BWA_CPU_52 lix33        r     08/17/2022 13:05:56 bigmem.q@node137.cm.cluster       52        
       Full jobname:     BWA_CPU_52
       Master Queue:     bigmem.q@node137.cm.cluster
       Requested PE:     by_node 52
       Granted PE:       by_node 52
       Hard Resources:   
       Soft Resources:   
       Hard requested queues: bigmem.q
4282559 0.59221 BWA_CPU_48 lix33        r     08/17/2022 13:06:11 bigmem.q@node138.cm.cluster       48        
       Full jobname:     BWA_CPU_48
       Master Queue:     bigmem.q@node138.cm.cluster
       Requested PE:     by_node 48
       Granted PE:       by_node 48
       Hard Resources:   
       Soft Resources:   
       Hard requested queues: bigmem.q
4282560 0.56084 BWA_CPU_32 lix33        r     08/17/2022 13:06:26 bigmem.q@node139.cm.cluster       32        
       Full jobname:     BWA_CPU_32
       Master Queue:     bigmem.q@node139.cm.cluster
       Requested PE:     by_node 32
       Granted PE:       by_node 32
       Hard Resources:   
       Soft Resources:   
       Hard requested queues: bigmem.q
4282561 0.54515 BWA_CPU_24 lix33        r     08/17/2022 13:06:41 bigmem.q@node140.cm.cluster       24        
       Full jobname:     BWA_CPU_24
       Master Queue:     bigmem.q@node140.cm.cluster
       Requested PE:     by_node 24
       Granted PE:       by_node 24
       Hard Resources:   
       Soft Resources:   
       Hard requested queues: bigmem.q
4282562 0.52946 BWA_CPU_16 lix33        r     08/17/2022 13:06:56 bigmem.q@node141.cm.cluster       16        
       Full jobname:     BWA_CPU_16
       Master Queue:     bigmem.q@node141.cm.cluster
       Requested PE:     by_node 16
       Granted PE:       by_node 16
       Hard Resources:   
       Soft Resources:   
       Hard requested queues: bigmem.q
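
(A running-job listing in this format, including the PE and queue requests, comes from SGE's `qstat` in extended-request mode:)

```bash
# List the user's jobs together with their resource requests
qstat -u lix33 -r
```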
lxwgcool commented 2 years ago

Finished case:

Job 4282558 (BWA_CPU_52) Complete
 User             = lix33
 Queue            = bigmem.q@node137.cm.cluster
 Host             = node137.cm.cluster
 Start Time       = 08/17/2022 13:05:56
 End Time         = 08/17/2022 17:03:04
 User Time        = 8:09:45:53
 System Time      = 00:16:03
 Wallclock Time   = 03:57:08
 CPU              = 8:10:01:57
 Max vmem         = 15.349G
 Exit Status      = 0

Finished case:

Job 4282559 (BWA_CPU_48) Complete
 User             = lix33
 Queue            = bigmem.q@node138.cm.cluster
 Host             = node138.cm.cluster
 Start Time       = 08/17/2022 13:06:11
 End Time         = 08/17/2022 17:20:25
 User Time        = 8:08:07:33
 System Time      = 00:15:28
 Wallclock Time   = 04:14:14
 CPU              = 8:08:23:02
 Max vmem         = 14.579G
 Exit Status      = 0
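
These per-job summaries are the job-end reports that SGE mails out. Assuming accounting is enabled on the cluster, the same figures (wallclock, CPU, maxvmem, exit status) can also be retrieved after the fact:

```bash
# Query SGE accounting for a finished job by ID
qacct -j 4282558
```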
lxwgcool commented 2 years ago

Job 4282560 (BWA_CPU_32) Complete
 User             = lix33
 Queue            = bigmem.q@node139.cm.cluster
 Host             = node139.cm.cluster
 Start Time       = 08/17/2022 13:06:26
 End Time         = 08/17/2022 19:18:31
 User Time        = 8:04:24:08
 System Time      = 00:14:16
 Wallclock Time   = 06:12:05
 CPU              = 8:04:38:24
 Max vmem         = 11.765G
 Exit Status      = 0

Job 4282561 (BWA_CPU_24) Complete
 User             = lix33
 Queue            = bigmem.q@node140.cm.cluster
 Host             = node140.cm.cluster
 Start Time       = 08/17/2022 13:06:41
 End Time         = 08/17/2022 21:32:41
 User Time        = 8:08:41:29
 System Time      = 00:14:23
 Wallclock Time   = 08:26:00
 CPU              = 8:08:55:52
 Max vmem         = 10.427G
 Exit Status      = 0

Job 4282562 (BWA_CPU_16) Complete
 User             = lix33
 Queue            = bigmem.q@node141.cm.cluster
 Host             = node141.cm.cluster
 Start Time       = 08/17/2022 13:06:56
 End Time         = 08/18/2022 00:54:35
 User Time        = 7:20:15:46
 System Time      = 00:18:29
 Wallclock Time   = 11:47:39
 CPU              = 7:20:34:16
 Max vmem         = 8.936G
 Exit Status      = 0
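
With all five runs finished, the wallclock times can be reduced to speedup and efficiency figures. A minimal sketch of the arithmetic, with the times hard-coded from the reports above and the 16-core run as the baseline:

```bash
#!/bin/bash
# Wallclock times from the five job reports, as cores:H:M:S.
# Speedup is relative to the 16-core run; efficiency = speedup / (cores/16).
for entry in 16:11:47:39 24:8:26:00 32:6:12:05 48:4:14:14 52:3:57:08; do
    IFS=: read -r cores h m s <<< "$entry"
    # 10# forces base-10 so values like "05" are not read as octal
    secs=$(( 10#$h * 3600 + 10#$m * 60 + 10#$s ))
    echo "$cores $secs"
done | awk '
    NR == 1 { base_cores = $1; base_secs = $2 }
    {
        speedup = base_secs / $2
        eff = speedup / ($1 / base_cores)
        printf "%2d cores: %6d s  speedup %.2fx  efficiency %.2f\n", $1, $2, speedup, eff
    }'
```

With the times above this works out to roughly 0.92-0.95 parallel efficiency at every point, i.e. scaling stays close to linear all the way up to 52 cores.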

lxwgcool commented 2 years ago

Performance plots

[image: performance plot]

[image: performance plot]

lxwgcool commented 2 years ago

We can see that the 32-core run takes significantly longer than the 48-core and 52-core runs. Memory usage is not an issue in any of the test cases, since our servers can easily handle that level of consumption.

Therefore, the comparison provided by NVIDIA does not hold, at least for our cases.