Closed: mitar closed this issue 6 years ago
+1
Hi, to disable the printing, you can use:
>>> print("You can see me")
>>> with tl.ops.suppress_stdout():
...     print("You can't see me")
>>> print("You can see me")
hi @zsdonghao, it's not about how to suppress print, it's about logging info with more control. If we use logging, then we can configure what to output and where to output it. When we want to deploy tl to a production environment as a service, we need it to log normal and exception messages. Besides, we found some globals defined in layers.py:
set_keep = globals()
set_keep['_layers_name_list'] = []
set_keep['name_reuse'] = False
This piece of code makes tl hard to deploy as a service for multiple users.
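To illustrate the multi-user problem: because `set_keep` is module-level state, every request handled in the same process sees and mutates the same dict. The sketch below (hypothetical function names, mimicking the `layers.py` globals quoted above) shows one user's layer names leaking into another user's view:

```python
# Hypothetical sketch of why module-level globals break a shared service:
# the registry below is created once per process, not once per user/model.
set_keep = globals()
set_keep['_layers_name_list'] = []

def build_model_for_user(user, layer_names):
    # Each "user" appends to the same process-wide list.
    for name in layer_names:
        set_keep['_layers_name_list'].append(name)

build_model_for_user("alice", ["dense1", "dense2"])
build_model_for_user("bob", ["dense1"])  # collides with alice's "dense1"

# alice's and bob's names are now interleaved in one shared list:
print(set_keep['_layers_name_list'])
```

A per-model (instance-level) registry instead of a module-level one would keep each user's state isolated.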
I second everything @haiy said. Both of those pieces have bitten us as well.
@haiy For production, as far as I know, people usually use TensorFlow Serving, so threading is not necessary. Maybe I didn't get your point? Or are there reasons threading is better?
@mitar Thanks, I will think about it carefully ~
Totally agree with the concerns over using print instead of logging. Things like suppress_stdout and disable_print suppress stdout globally, regardless of whether the prints come from the library, which feels quite overreaching. Logging seems more suitable here; see When to use logging.
I think we could just add the following to all the modules that need logging (to create a logger hierarchy):
import logging
logger = logging.getLogger(__name__)
And use logger.{debug, info, warning, ...} instead of print, giving users full control over how and where the logs are written.
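To show what that control looks like on the user's side (a sketch assuming the library's loggers live under a "tensorlayer" namespace, which would follow from the getLogger(__name__) pattern above):

```python
import logging

# Library side: each module gets its own logger in the hierarchy.
logger = logging.getLogger("tensorlayer.layers")

# User side: configure levels once, at application startup.
logging.basicConfig(level=logging.WARNING)                 # everything else stays quiet
logging.getLogger("tensorlayer").setLevel(logging.DEBUG)   # opt in to library detail

# These now go through the user's handlers, at the user's chosen level:
logger.debug("building layer with n_units=100")
logger.info("layer params initialized")
```

The child logger "tensorlayer.layers" inherits its level from "tensorlayer", so one setLevel call controls the whole library without touching stdout globally.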
Also, I'm not sure why we need global variables to prevent layer name collisions, since the use of TensorFlow's variable scopes already handles that.
Maybe it's also easier to just pass reuse as a parameter to the constructors of layers, instead of maintaining it as the global state set_keep['name_reuse'], which is barely respected anywhere (except for TimeDistributedLayer) anyway.
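A constructor-level reuse flag could look like this (a hypothetical sketch, not TensorLayer's actual API, mirroring how tf.variable_scope accepts reuse directly):

```python
class DenseLayer:
    """Hypothetical layer that takes reuse as an explicit argument
    instead of reading the global set_keep['name_reuse']."""

    def __init__(self, n_units, name="dense", reuse=False):
        self.n_units = n_units
        self.name = name
        self.reuse = reuse
        # In real code this would be forwarded to the variable scope:
        # with tf.variable_scope(name, reuse=reuse): ...

first = DenseLayer(100, name="fc1")               # creates the variables
shared = DenseLayer(100, name="fc1", reuse=True)  # reuses them explicitly
```

Making reuse local to each constructor call keeps the intent visible at the call site and removes one more piece of process-wide mutable state.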
@tomtung @mitar @haiy we are preparing for a PR to replace all the print with logging in the library.
Just to repeat what I proposed in #306:
The idea would be to make this output optional (default = True or False). I think there could be different ways to do this.
Simple and backward compatible: a "verbose" parameter can be added to the Layer class to influence the behavior of the print_params() method.
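A hypothetical sketch of that verbose flag (defaulting to True so existing behavior is unchanged), with print_params() returning how many lines it printed:

```python
class Layer:
    """Hypothetical base class with an opt-out verbose flag."""

    def __init__(self, name="layer", verbose=True):
        self.name = name
        self.verbose = verbose
        self._params = []

    def print_params(self):
        # Returns the number of lines printed; 0 when silenced.
        if not self.verbose:
            return 0
        count = 0
        for i, p in enumerate(self._params):
            print("  param {:3}: {}".format(i, p))
            count += 1
        return count

quiet = Layer(verbose=False)
quiet._params = ["W:0", "b:0"]
quiet.print_params()  # prints nothing
```

Since verbose defaults to True, no existing user code would change behavior, which is the backward-compatibility argument above.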
Why should we re-invent the wheel? Everything is already implemented in TensorFlow. We can use the logging levels that already exist in TF:
tf.logging._level_names ## outputs => {50: 'FATAL', 40: 'ERROR', 30: 'WARN', 20: 'INFO', 10: 'DEBUG'}
tf.logging.get_verbosity() ## outputs => 30 (default value)
tf.logging.set_verbosity(tf.logging.DEBUG)
tf.logging.get_verbosity() ## outputs => 10
We could, for instance, decide that for logging level <= 20 (INFO & DEBUG) we output the TensorLayer information as usual, and skip it for any higher value.
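The gating rule above can be sketched with the stdlib logging module as a stand-in for tf.logging (both use the same numeric levels: DEBUG=10, INFO=20, WARN=30):

```python
import logging

logger = logging.getLogger("tensorlayer")
logger.addHandler(logging.NullHandler())

def should_print_layer_info():
    # Emit TensorLayer info only when the effective level is <= 20
    # (i.e. INFO or DEBUG), as proposed above.
    return logger.getEffectiveLevel() <= logging.INFO

logger.setLevel(logging.WARN)     # default verbosity 30: info suppressed
print(should_print_layer_info())

logger.setLevel(logging.DEBUG)    # verbosity 10: info emitted
print(should_print_layer_info())
```

Note that logger.info() already applies exactly this check internally, so in practice the explicit comparison is only needed when the message is expensive to build.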
Code is full of print statements. Please use logging so that one can intercept output and control noisiness.