tensorlayer / TensorLayer

Deep Learning and Reinforcement Learning Library for Scientists and Engineers
http://tensorlayerx.com

Use logging instead of print statements #207

Closed mitar closed 6 years ago

mitar commented 7 years ago

Code is full of print statements. Please use logging so that one can intercept output and control noisiness.

haiy commented 7 years ago

+1

zsdonghao commented 7 years ago

Hi, to disable the printing, you can use:

>>> print("You can see me")
>>> with tl.ops.suppress_stdout():
...     print("You can't see me")
>>> print("You can see me")
haiy commented 7 years ago

Hi @zsdonghao. It's not about how to suppress print; it's about logging info with more control. If we use logging, we can configure what to output and where. When we want to deploy TL to a production environment as a service, we need it to log both normal and exception messages. Besides, we found some globals defined in layers.py:

set_keep = globals()
set_keep['_layers_name_list'] = []
set_keep['name_reuse'] = False

This piece of code makes TL hard to deploy as a service for multiple users.
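To illustrate the concern, here is a minimal sketch of keeping the layer-name bookkeeping in a per-session object instead of module globals; the names (`LayerContext`, `register`) are hypothetical and not part of TensorLayer's API:

```python
# Hypothetical sketch: per-session layer registry instead of module globals.
class LayerContext:
    """Holds layer-name bookkeeping for one model/session."""
    def __init__(self, name_reuse=False):
        self.layer_names = []
        self.name_reuse = name_reuse

    def register(self, name):
        # Reject duplicate names unless reuse was explicitly requested.
        if name in self.layer_names and not self.name_reuse:
            raise ValueError("Layer '%s' already exists" % name)
        self.layer_names.append(name)

# Two users (or threads) each get isolated state:
ctx_a = LayerContext()
ctx_b = LayerContext()
ctx_a.register("dense1")
ctx_b.register("dense1")  # no collision: state is not shared globally
```

With `set_keep = globals()`, by contrast, every model built in the same process shares one name list, so concurrent users can trip over each other's layer names.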

mitar commented 7 years ago

I second everything @haiy said. Both of those two pieces have bitten us as well.

zsdonghao commented 7 years ago

@haiy For production, as far as I know, people usually use TensorFlow Serving, so threading is not necessary. Maybe I didn't get your point? Or are there reasons why threading is better?

mitar commented 7 years ago

Sometimes threading is used underneath your code. See here and here.

zsdonghao commented 7 years ago

@mitar Thanks, I will think about it carefully ~

tomtung commented 7 years ago

Totally agree with the concerns over using print instead of logging. Things like suppress_stdout and disable_print suppress stdout globally, regardless of whether the output comes from the library, which feels quite overreaching. Logging seems more suitable here; see When to use logging.

I think we could just add the following to all the modules that need logging (to create a logger hierarchy):

import logging
logger = logging.getLogger(__name__)

And use logger.{debug, info, warning, ...} instead of print, giving users full control over how and where the logs are written.
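For example, once every module creates its logger via `logging.getLogger(__name__)`, all of the library's loggers share the `tensorlayer` prefix, and a user can configure them in one place. A minimal sketch (the logger names here assume the package is named `tensorlayer`):

```python
import logging

# Route all log records to stderr with a simple format.
logging.basicConfig(format="%(name)s %(levelname)s: %(message)s")

# Configuring the parent logger affects every child (tensorlayer.layers, ...).
tl_logger = logging.getLogger("tensorlayer")
tl_logger.setLevel(logging.WARNING)  # silence info-level layer chatter

layer_logger = logging.getLogger("tensorlayer.layers")
layer_logger.info("building DenseLayer")  # suppressed at WARNING level
layer_logger.warning("shape mismatch")    # still emitted
```

Users can just as easily redirect the output to a file or a custom handler, which is exactly the interception and noise control this issue asks for.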

tomtung commented 7 years ago

Also, I'm not sure why we need global variables to prevent layer name collisions, since the use of TensorFlow's variable scopes already handles that.

Maybe it's also easier to just pass reuse as a parameter to the constructors of layers, instead of maintaining it as the global state set_keep['name_reuse'], which is hardly respected anywhere (except in TimeDistributedLayer) anyway.
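The suggestion above could look like the following sketch, where `reuse` is an explicit constructor argument rather than a module-level flag; the class and the class-level `_seen` set are illustrative only, not TensorLayer's actual implementation:

```python
# Hypothetical sketch: `reuse` as a constructor argument instead of the
# global set_keep['name_reuse'].
class Layer:
    _seen = set()  # shown at class level for brevity; could live in a context

    def __init__(self, name, reuse=False):
        # A duplicate name is an error unless the caller opts into reuse.
        if name in Layer._seen and not reuse:
            raise ValueError(
                "Layer name '%s' already used; pass reuse=True" % name)
        Layer._seen.add(name)
        self.name = name

Layer("dense1")              # first use: fine
Layer("dense1", reuse=True)  # explicit, per-call reuse: fine
```

This makes the reuse intent visible at the call site instead of depending on hidden global state.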

luomai commented 6 years ago

@tomtung @mitar @haiy we are preparing for a PR to replace all the print with logging in the library.

DEKHTIARJonathan commented 6 years ago

Just to repeat what I proposed in #306

The idea would be to make this output optional (default = True or False). I think there could be different ways to do this.

Solution 1: Create a verbose parameter in the Layer API

Simple and backward compatible: a "verbose" parameter can be added to the Layer class to control the behavior of the print_params() method.
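A minimal sketch of this option, assuming a simplified `Layer` class (the `verbose` and `all_params` attributes mirror the proposal, not the actual TL code):

```python
# Hypothetical sketch of Solution 1: a `verbose` flag gating print_params().
class Layer:
    def __init__(self, name, verbose=True):
        self.name = name
        self.verbose = verbose
        self.all_params = []

    def print_params(self):
        # Skip the parameter dump entirely when the layer is not verbose.
        if not self.verbose:
            return
        for i, p in enumerate(self.all_params):
            print("  param {:3}: {}".format(i, p))

Layer("dense1", verbose=False).print_params()  # prints nothing
```

The default `verbose=True` would preserve the current printing behavior for existing code.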

Solution 2: Use the logging module from TF

Why should we reinvent the wheel? Everything is already implemented in TensorFlow. We can use the logging levels that already exist in TF.

tf.logging._level_names    ## outputs => {50: 'FATAL', 40: 'ERROR', 30: 'WARN', 20: 'INFO', 10: 'DEBUG'}
tf.logging.get_verbosity() ## outputs => 30 (default value)

tf.logging.set_verbosity(tf.logging.DEBUG)
tf.logging.get_verbosity() ## outputs => 10

We could, for instance, decide that for logging level <= 20 (INFO & DEBUG) we output the TensorLayer information as usual, and skip it for any higher value.
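That gating rule can be sketched with the stdlib logging module, which uses the same numeric levels that tf.logging wraps in TF 1.x (DEBUG=10, INFO=20, WARN=30, ...); the function name `maybe_print_params` is hypothetical:

```python
import logging

logger = logging.getLogger("tensorlayer")
logger.setLevel(logging.WARN)  # analogous to tf.logging.set_verbosity(...)

def maybe_print_params(params):
    # Emit the per-layer parameter dump only at INFO verbosity or lower,
    # i.e. effective level <= 20, as proposed above.
    if logger.getEffectiveLevel() <= logging.INFO:
        for p in params:
            print(p)

maybe_print_params(["W:0", "b:0"])  # prints nothing at WARN level
```

Dropping the level to `logging.DEBUG` would re-enable the dump, so one global verbosity knob controls all of TensorLayer's output.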