vajda-lab / scc-api

RESTful API for high-performance computing centers

Collect command output from your Linux commands #27

Closed: jefftriplett closed this issue 3 years ago

jefftriplett commented 3 years ago

To finish off the Celery work, we would like to collect the output of each command you use to: submit a job, list details for a job, list jobs for a user, and delete a job.
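
A minimal sketch of what collecting that output could look like, using subprocess from inside a Celery task. The task name and return shape here are assumptions for illustration, not the repo's actual code:

from celery import shared_task
import subprocess

@shared_task
def run_scheduler_command(args):
    # args is the argv list, e.g. ["qstat", "-u", "awake"] (hypothetical usage)
    result = subprocess.run(args, capture_output=True, text=True)
    # Keep everything a caller might want to parse or store later
    return {
        "args": args,
        "returncode": result.returncode,
        "stdout": result.stdout,
        "stderr": result.stderr,
    }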

jefftriplett commented 3 years ago

cc @kojoidrissa for details

Amanda-Wakefield commented 3 years ago

submitting a job

$ qsub -t 1-2 atlas_whole.py
Your job-array 5290723.1-2:1 ("atlas_whole.py") has been submitted

listing details for a job

$ qstat -j 5290723
==============================================================
job_number:                 5290723
exec_file:                  job_scripts/5290723
submission_time:            Fri Mar 19 11:03:19 2021
owner:                      awake
uid:                        255619
group:                      docking
gid:                        88667
sge_o_home:                 /usr3/bustaff/awake
sge_o_log_name:             awake
sge_o_path:                 /share/pkg.7/anaconda3/5.2.0/install/bin:/usr/java/default/jre/bin:/usr/java/default/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/dell/srvadmin/bin:/bin:/usr3/bustaff/awake/bin:.
sge_o_shell:                /bin/bash
sge_o_workdir:              /projectnb/docking/awake/gpcr_projects/keseru_new_extra_10
sge_o_host:                 scc1
account:                    sge
cwd:                        /projectnb/docking/awake/gpcr_projects/keseru_new_extra_10
hard resource_list:         no_gpu=TRUE,h_rt=43200
soft resource_list:         buyin=TRUE
mail_list:                  awake@scc1.bu.edu
notify:                     FALSE
job_name:                   atlas_whole.py
jobshare:                   0
env_list:                   PATH=/share/pkg.7/anaconda3/5.2.0/install/bin:/usr/java/default/jre/bin:/usr/java/default/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/opt/dell/srvadmin/bin:/bin:/usr3/bustaff/awake/bin:.
script_file:                scripts/atlas_whole.py
parallel environment:  omp16 range: 16
verify_suitable_queues:     2
project:                    docking
job-array tasks:            1-2:1
usage    1:                 cpu=00:00:00, mem=0.00000 GBs, io=0.00000, vmem=N/A, maxvmem=N/A
usage    2:                 cpu=00:00:00, mem=0.00000 GBs, io=0.00000, vmem=N/A, maxvmem=N/A
scheduling info:            (Collecting of scheduler job information is turned off)
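
Since the qstat -j output above is mostly key: value lines, a sketch for collecting it into a dict (skipping the ===== banner; treating the repeated usage lines as ordinary keys is an assumption):

def parse_qstat_j(stdout):
    # Split each "key: value" line on the first colon only, so values like
    # submission_time and sge_o_path keep their embedded colons
    info = {}
    for line in stdout.splitlines():
        if not line.strip() or line.startswith("="):
            continue
        key, sep, value = line.partition(":")
        if sep:
            info[key.strip()] = value.strip()
    return info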

listing jobs for a user

$ qstat -u awake
job-ID  prior   name       user         state submit/start at     queue                          slots ja-task-ID 
-----------------------------------------------------------------------------------------------------------------
5290728 0.10087 atlas_whol awake        r     03/19/2021 11:09:21 neuro-pub@scc-md8.scc.bu.edu      16 1
5290728 0.10087 atlas_whol awake        r     03/19/2021 11:09:21 ecoggroup-pub@scc-gc3.scc.bu.e    16 2
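
A sketch for the tabular qstat -u output; splitting on whitespace is an assumption and would break if a field ever contained spaces:

def parse_qstat_u(stdout):
    # Column names, with the "submit/start at" column split into date and time
    fields = ["job_id", "prior", "name", "user", "state",
              "submit_date", "submit_time", "queue", "slots", "ja_task_id"]
    jobs = []
    # Skip the header line and the dashed separator line
    for line in stdout.splitlines()[2:]:
        parts = line.split()
        if len(parts) == len(fields):
            jobs.append(dict(zip(fields, parts)))
    return jobs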

deleting a job

$ qdel 5290728
awake has registered the job-array task 5290728.1 for deletion
awake has registered the job-array task 5290728.2 for deletion
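
And a sketch for the qdel confirmations; again, the regex is guessed from the two lines above and also allows a plain (non-array) job ID:

import re

# Matches e.g.: awake has registered the job-array task 5290728.1 for deletion
QDEL_LINE = re.compile(
    r"(\S+) has registered the job(?:-array task)? (\d+)(?:\.(\d+))? for deletion"
)

def parse_qdel_output(stdout):
    deletions = []
    for m in QDEL_LINE.finditer(stdout):
        user, job_id, task_id = m.groups()
        deletions.append({
            "user": user,
            "job_id": int(job_id),
            "task_id": int(task_id) if task_id else None,
        })
    return deletions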
jefftriplett commented 3 years ago

Resources

kojoidrissa commented 3 years ago

Still looking at the BU/Grid Engine/SCC docs, but this is done.