In the troubleshooting section, make sure there are instructions for shutting down docker.
If you ever need to shut down a pipeline running in docker, run docker container ls to see the containers currently running. There are probably two: the manager and one worker. The manager is the one running biolockj; the worker was probably launched more recently and is probably running bash, R, or Rscript (depending on the module). Get the id of the worker container and run docker stop <workerID>. Within a minute or so, the biolockj manager will detect that the worker container has stopped and will flag the current module and the pipeline as "biolockjFailed".
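A minimal sketch of picking out the worker id; the container ids and commands below are made-up sample output standing in for what docker container ls would print on a live system:

```shell
# Sample of what `docker container ls --format '{{.ID}} {{.Command}}'`
# might print: one line per container. The worker is the container
# NOT running biolockj (it runs bash, R, or Rscript instead).
sample='f3a9c1d2e4b5 "Rscript"
a1b2c3d4e5f6 "biolockj"'

# Drop the manager line, keep the first remaining line, take its id.
workerID=$(printf '%s\n' "$sample" | grep -v biolockj | head -n 1 | cut -d' ' -f1)
echo "$workerID"   # this is the id to pass to: docker stop <workerID>
```

On a real system, replace the sample with the live docker container ls output and finish with docker stop "$workerID".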
Add notes about running on a pc: you will likely need to use -f, will in all likelihood need to run in docker (-d), and will need to specify $BLJ_PROJ with the --blj-proj arg.
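A hedged sketch of what that launch might look like on a pc; the pipeline directory and config file name are placeholders, not part of any real install:

```shell
# On a pc, the $BLJ_PROJ environment variable is typically not set up,
# so the pipeline output directory is passed explicitly via --blj-proj.
BLJ_PROJ="$HOME/biolockj_pipelines"   # hypothetical pipeline output dir

if command -v biolockj >/dev/null 2>&1; then
    # -d: run the pipeline in docker
    # -f: run in the foreground
    biolockj -d -f --blj-proj "$BLJ_PROJ" myConfig.properties
else
    echo "biolockj is not on the PATH"
fi
```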
Make sure restart directions exist, and that there is emphasis on the (likely) need to reference the original config file.
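A sketch of what the restart directions might show; the --restart flag name, the pipeline directory, and the config file name are all assumptions to be confirmed against biolockj --help before this goes in the docs:

```shell
BLJ_PROJ="$HOME/biolockj_pipelines"   # hypothetical pipeline output dir

if command -v biolockj >/dev/null 2>&1; then
    # ASSUMPTION: a --restart option pointing at the failed pipeline dir;
    # note the original config file is passed again alongside it.
    biolockj -d --restart "$BLJ_PROJ/myPipeline" myConfig.properties
else
    echo "biolockj is not on the PATH"
fi
```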