Closed frallain closed 5 years ago
+1. I'd like to have logs streamed according to their LEVEL as well; currently everything seems to go to stderr.
In fact I figured out the pattern:
yyyy-MM-dd HH:mm:ss.SSSSSS: LEVELLETTER
such as 2018-11-06 15:46:56.512619: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
with I for INFO, W for WARNING, F for FATAL, etc.
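For anyone who wants to post-process the stream, here is a minimal sketch of that pattern as a regex. The set of level letters (I, W, E, F) is an assumption based on the lines quoted in this thread, not an exhaustive list:

```python
import re

# Matches lines like:
# "2018-11-06 15:46:56.512619: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models."
LOG_LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6}): "
    r"(?P<level>[IWEF]) "
    r"(?P<rest>.*)$"
)

LEVELS = {"I": "INFO", "W": "WARNING", "E": "ERROR", "F": "FATAL"}

def parse(line):
    """Return (level_name, timestamp, message), or None for header-less lines."""
    m = LOG_LINE.match(line)
    if not m:
        return None
    return LEVELS[m.group("level")], m.group("ts"), m.group("rest")
```

Note that `parse` deliberately returns `None` for lines with no header (like the `[warn] getaddrinfo` line below), which is exactly the inconsistency this issue is about.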
But there are some inconsistencies, for e.g. let's take the logs output when initializing a tensorflow serving instance :
2018-11-06 15:46:56.512619: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2018-11-06 15:46:56.512801: I tensorflow_serving/model_servers/server_core.cc:517] (Re-)adding model: feeding
2018-11-06 15:46:56.512821: I tensorflow_serving/model_servers/server_core.cc:517] (Re-)adding model: drinking
2018-11-06 15:46:56.613329: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: feeding version: 1}
2018-11-06 15:46:56.613410: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: feeding version: 1}
2018-11-06 15:46:56.613433: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: feeding version: 1}
2018-11-06 15:46:56.613469: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:360] Attempting to load native SavedModelBundle in bundle-shim from: /models/feeding/1
2018-11-06 15:46:56.613489: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/feeding/1
2018-11-06 15:46:56.676171: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2018-11-06 15:46:56.713245: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: drinking version: 1}
2018-11-06 15:46:56.713337: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: drinking version: 1}
2018-11-06 15:46:56.713371: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: drinking version: 1}
2018-11-06 15:46:56.713406: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:360] Attempting to load native SavedModelBundle in bundle-shim from: /models/drinking/1
2018-11-06 15:46:56.713425: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/drinking/1
2018-11-06 15:46:56.756574: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-11-06 15:46:56.765693: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2018-11-06 15:46:57.594782: I external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:964] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-11-06 15:46:57.595155: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1411] Found device 0 with properties:
name: Tesla P4 major: 6 minor: 1 memoryClockRate(GHz): 1.1135
pciBusID: 0000:03:00.0
totalMemory: 7.43GiB freeMemory: 7.31GiB
2018-11-06 15:46:57.595184: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0
2018-11-06 15:47:01.916462: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-11-06 15:47:01.916539: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:977] 0
2018-11-06 15:47:01.916556: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0: N
2018-11-06 15:47:01.923521: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7051 MB memory) -> physical GPU (device: 0, name: Tesla P4, pci bus id: 0000:03:00.0, compute capability: 6.1)
2018-11-06 15:47:02.150568: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1490] Adding visible gpu devices: 0
2018-11-06 15:47:02.150787: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:971] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-11-06 15:47:02.150858: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:977] 0
2018-11-06 15:47:02.150954: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0: N
2018-11-06 15:47:02.151195: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1103] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7051 MB memory) -> physical GPU (device: 0, name: Tesla P4, pci bus id: 0000:03:00.0, compute capability: 6.1)
2018-11-06 15:47:02.327657: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:162] Restoring SavedModel bundle.
2018-11-06 15:47:02.328723: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:162] Restoring SavedModel bundle.
2018-11-06 15:47:02.835821: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:138] Running MainOp with key legacy_init_op on SavedModel bundle.
2018-11-06 15:47:02.835897: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:259] SavedModel load for tags { serve }; Status: success. Took 6122461 microseconds.
2018-11-06 15:47:02.835953: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:83] No warmup data file found at /models/drinking/1/assets.extra/tf_serving_warmup_requests
2018-11-06 15:47:02.836927: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: drinking version: 1}
2018-11-06 15:47:02.850993: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:138] Running MainOp with key legacy_init_op on SavedModel bundle.
2018-11-06 15:47:02.851063: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:259] SavedModel load for tags { serve }; Status: success. Took 6237569 microseconds.
2018-11-06 15:47:02.851119: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:83] No warmup data file found at /models/feeding/1/assets.extra/tf_serving_warmup_requests
2018-11-06 15:47:02.852328: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: feeding version: 1}
2018-11-06 15:47:02.874767: I tensorflow_serving/model_servers/server.cc:285] Running gRPC ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
2018-11-06 15:47:02.884170: I tensorflow_serving/model_servers/server.cc:301] Exporting HTTP/REST API at:localhost:8501 ...
[evhttp_server.cc : 235] RAW: Entering the event loop ...
Some log lines are printed without any header:
name: Tesla P4 major: 6 minor: 1 memoryClockRate(GHz): 1.1135
pciBusID: 0000:03:00.0
totalMemory: 7.43GiB freeMemory: 7.31GiB
Some log lines are printed with inconsistent headers:
[warn] getaddrinfo: address family for nodename not supported
2018-11-06 15:47:02.884170: I tensorflow_serving/model_servers/server.cc:301] Exporting HTTP/REST API at:localhost:8501 ...
[evhttp_server.cc : 235] RAW: Entering the event loop ...
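Until the headers are made consistent upstream, a log shipper can at least fold the header-less continuation lines (like the Tesla P4 device properties above) into the preceding entry. This is a sketch, assuming any line without the timestamp-plus-level header belongs to the previous entry:

```python
import re

# A "real" entry starts with the timestamp + level-letter header.
HEADER = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6}: [IWEF] ")

def group_entries(lines):
    """Yield log entries, folding header-less lines into the preceding entry."""
    entry = []
    for line in lines:
        if HEADER.match(line) and entry:
            yield "\n".join(entry)
            entry = []
        entry.append(line)
    if entry:
        yield "\n".join(entry)
```

The bracket-style lines (`[warn] ...`, `[evhttp_server.cc : 235] RAW: ...`) would also be folded into the previous entry by this heuristic, which may or may not be what you want.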
There is also 'E' for 'ERROR'. It is here.
For future readers of this FR: as frallain@ figured out, there are places where log levels are used. However, the places where the code does not use the LOG() macro still need to be updated. We would appreciate community PRs for this.
No log levels seem to be printed in the logs, such as INFO, WARNING, ERROR, DEBUG, and even DEBUG1, DEBUG2, ... for the various levels of VLOG. Also, it would be nice not to print everything to stderr, but to stdout, and send only the ERROR level to stderr. Or to let the user choose what goes to stderr.
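As a workaround until the server supports this natively, the merged stream can be re-split outside the process. This is a sketch of a hypothetical wrapper script (the name `route_logs.py` and the choice of which letters count as errors are assumptions, not part of TensorFlow Serving):

```python
import re
import sys

# Treat E (ERROR) and F (FATAL) entries as stderr material; everything
# else, including header-less continuation lines, goes to stdout.
ERROR_HEADER = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{6}: [EF] ")

def route(line, out=sys.stdout, err=sys.stderr):
    """Write one log line to stderr if it is an E/F entry, else to stdout."""
    target = err if ERROR_HEADER.match(line) else out
    target.write(line + "\n")

if __name__ == "__main__":
    for raw in sys.stdin:
        route(raw.rstrip("\n"))
```

It could then be used as, e.g., `tensorflow_model_server ... 2>&1 | python route_logs.py`, with the caveat that header-less continuation lines of an error entry would still land on stdout.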