Closed: sungeunbae closed this pull request 1 year ago
Honestly, I'm not sure why get_basic_logger isn't working. From memory (which is a little shaky on the logging), it should be attached to the main logger when you execute run_cybershake.
It would be better to fix get_basic_logger and the logging itself if that is the issue, as it is used a lot throughout the workflow, and I find the message handling a bit messy.
Yes, agreed. I hope I can get to the bottom of it; as you pointed out, the logging is a little messy.
As the logging issue happens only when add_to_mgmt_queue.py is executed as a standalone script (from Slurm or PBS), I think it is probably best to pipe the logging to stdout so that it gets added to the .out file, rather than to a separate log file or to one of the log files in the root (finding the correct log file name at that stage is not straightforward when this script is called from a Slurm/PBS script). I tried this avenue but wasn't successful, and ended up with this quick and dirty solution.
Your feedback reinforced my initial inclination to fix this properly. I'm happy to withdraw this PR and try a better solution.
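For reference, the stdout-piping idea could be sketched roughly like this with the stdlib logging module. The function name `get_stdout_logger` is illustrative, not part of the actual qclogging API; the point is just that a StreamHandler on sys.stdout makes the messages land in the scheduler job's .out file.

```python
import logging
import sys


def get_stdout_logger(name="add_to_queue"):
    """Return a logger that writes to stdout, so Slurm/PBS captures
    the messages in the job's .out file. Hypothetical helper, not
    the real qclogging interface."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    if not logger.handlers:  # avoid stacking duplicate handlers on repeat calls
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(name)s %(levelname)s: %(message)s")
        )
        logger.addHandler(handler)
    return logger


logger = get_stdout_logger()
logger.info("status change recorded")  # ends up in the scheduler .out file
```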
Let's make this PR pending.
When add_to_mgmt_queue.py is called from a Slurm/PBS script, the add_to_queue() logger tied to the masterlog*.txt file in the cybershake root directory is not accessible, so a status change triggered by the script is never logged.
The idea was to print these messages to stdout; the .out file from the scheduler job would then contain the log messages.
Here is a question: by default, a basic logger (= qclogging.get_basic_logger()) is used, but I have no idea where its output goes. I tried get_logger() with the stdout_printer switch option, but to no avail.
Instead, I devised this quick and dirty solution. Any suggestions welcome.
At least I now get these messages, and I know the update files are indeed written where they are supposed to be.