XanaduAI / strawberryfields

Strawberry Fields is a full-stack Python library for designing, simulating, and optimizing continuous variable (CV) quantum optical circuits.
https://strawberryfields.ai
Apache License 2.0

Logging job ID with engine.run #347

Closed antalszava closed 4 years ago

antalszava commented 4 years ago

Context: When using RemoteEngine.run for submitting jobs, it is desirable to also get the ID of the Job.

Description of the Change:

Benefits: Users will receive a notice containing the job ID when submitting a job, as well as a notice when a job fails.
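To illustrate the behaviour described above, here is a minimal sketch using Python's standard `logging` module. The names `submit_job` and `create_job` are hypothetical, purely illustrative wrappers, not the actual `RemoteEngine` API:

```python
import logging

log = logging.getLogger("strawberryfields")

def submit_job(create_job):
    # Hypothetical wrapper (not the actual RemoteEngine API): log the
    # job ID as soon as the job is submitted, and again on failure.
    job = create_job()
    log.info("The remote job %s was submitted.", job["id"])
    if job.get("status") == "failed":
        log.error("The remote job %s failed.", job["id"])
    return job
```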

Possible Drawbacks: N/A

Related GitHub Issues: N/A

codecov[bot] commented 4 years ago

Codecov Report

Merging #347 into master will not change coverage. The diff coverage is n/a.

@@           Coverage Diff           @@
##           master     #347   +/-   ##
=======================================
  Coverage   97.71%   97.71%           
=======================================
  Files          52       52           
  Lines        6398     6398           
=======================================
  Hits         6252     6252           
  Misses        146      146           


antalszava commented 4 years ago

@josh146 thank you so much for your review and comments! Your question about caplog made me realize that some tests were faulty: setting the level via logging.basicConfig(filename=test_file, level=level) has no effect within the caplog context. This spiralled into switching to stricter logic and adjusting the tests!

Short summary

The has_level_handler function was turned into the logging_handler_defined function, which checks for any handler that was defined (previously it only checked the levels set on handlers).

The entire process of creating a logger by default was made as strict as possible: if any user-defined logger configuration is found, SF configures nothing when it creates a logger internally.
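A sketch in the spirit of this check follows; the function name matches the one mentioned above, but the body is an assumption on my part and the real implementation may differ:

```python
import logging

def logging_handler_defined(logger: logging.Logger) -> bool:
    # Sketch (the actual SF implementation may differ): walk from this
    # logger up through its ancestors, for as long as propagation
    # continues, and report whether any handler has been attached.
    current = logger
    while current is not None:
        if current.handlers:
            return True
        if not current.propagate:
            break
        current = current.parent
    return False
```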

More details

With the previous solution, we checked two things:

These conditions, however, did not take into account whether logging configuration was added by the user not at the module level, but rather at the logging.root logger level. An example is logging.basicConfig(filename=test_file, level=level), which essentially just creates a handler for logging.root and sets the level of the root logger (a subtlety here is that this sets the level on the logger, not on the handler itself). *

As the root logger has a WARNING level by default, any module-specific logger will inherit that value as its effective level. "Effective level" means that even if a level has not been set on a logger explicitly, a level value is still sought recursively from its ancestor loggers.
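This recursive lookup can be demonstrated directly with the standard library (the logger names below are arbitrary):

```python
import logging

# A logger with no explicit level asks its ancestors for one;
# getEffectiveLevel() performs exactly this recursive lookup.
parent = logging.getLogger("eff_demo")
child = logging.getLogger("eff_demo.sub")
parent.setLevel(logging.ERROR)

assert child.level == logging.NOTSET               # nothing set here
assert child.getEffectiveLevel() == logging.ERROR  # inherited from parent
```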

Therefore, the logger is configured if and only if the following are true:

Overall, this means that if the user has configured at least one detail of logging, SF will not configure anything for them.
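A minimal sketch of this "configure only if the user configured nothing" policy; both function names and the exact heuristic are my assumptions, not the actual SF code:

```python
import logging

def user_configured(logger: logging.Logger) -> bool:
    # Hypothetical heuristic (illustrative, not the actual SF code):
    # treat logging as user-configured if the module logger has an
    # explicit level or handlers, or if the root logger was touched
    # (handlers added, or its level changed from the WARNING default).
    if logger.level != logging.NOTSET or logger.handlers:
        return True
    root = logging.root
    return bool(root.handlers) or root.level != logging.WARNING

def create_logger(name: str) -> logging.Logger:
    logger = logging.getLogger(name)
    if not user_configured(logger):
        # Only configure when the user has configured nothing at all.
        handler = logging.StreamHandler()
        handler.setFormatter(
            logging.Formatter("%(levelname)s:%(name)s:%(message)s")
        )
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger
```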

*Example:

import logging
import sys

# Before basicConfig, the root logger is at its WARNING default.
assert logging.root.level == logging.WARNING
logging.basicConfig(stream=sys.stderr, level=logging.INFO)

# basicConfig attached a single StreamHandler to the root logger...
assert len(logging.root.handlers) == 1
assert logging.root.handlers[0].stream is sys.stderr
# ...and set the level on the *logger*, not on the handler itself:
assert logging.root.level == logging.INFO
assert logging.root.handlers[0].level == logging.NOTSET