Open shazz opened 5 years ago
I just use TensorBoard to see the overall architecture, but I can't find the number of neural units in each layer.
You can also find the architectures in TensorBoard: if you go to the Text tab, you will find the architectures in real time.
However, this is less than ideal. If you can suggest a nice way to display this in TensorBoard or as text without adding any third-party dependencies, I am open to suggestions. A model.summary-style Keras API may be an option.
Marking as 'help-wanted' for anyone who wants to contribute.
I'm saving the model via export_saved_model and then using tensorflow.contrib.slim to print it:
```python
import tensorflow as tf
import tensorflow.contrib.slim as slim

def save_model(adanet_estimator, model_dir, input_dimension):
    def serving_input_fn():
        inputs = {
            "x": tf.placeholder(dtype=tf.float32,
                                shape=[None, input_dimension],
                                name="x")
        }
        return tf.estimator.export.ServingInputReceiver(inputs, inputs)
    return adanet_estimator.estimator.export_saved_model(model_dir, serving_input_fn)

def print_model_architecture(model_dir):
    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(sess, ["serve"], model_dir)
        model_vars = tf.trainable_variables()
        slim.model_analyzer.analyze_vars(model_vars, print_info=True)
```
Otherwise, you can get the variables and print them yourself:
```python
def get_trainable_variables(model_dir):
    model_vars = list()
    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(sess, ["serve"], model_dir)
        for var in tf.trainable_variables():
            model_vars.append(var.eval())
    return model_vars
```
This is an example of the slim printout:
```
---------
Variables: name (type shape) [size]
---------
adanet/iteration_4/ensemble_t4_2_layer_dnn/weighted_subnetwork_4/subnetwork/dense/kernel:0 (float32_ref 448x128) [57344, bytes: 229376]
adanet/iteration_4/ensemble_t4_2_layer_dnn/weighted_subnetwork_4/subnetwork/dense/bias:0 (float32_ref 128) [128, bytes: 512]
adanet/iteration_4/ensemble_t4_2_layer_dnn/weighted_subnetwork_4/subnetwork/dense_1/kernel:0 (float32_ref 128x128) [16384, bytes: 65536]
adanet/iteration_4/ensemble_t4_2_layer_dnn/weighted_subnetwork_4/subnetwork/dense_1/bias:0 (float32_ref 128) [128, bytes: 512]
adanet/iteration_4/ensemble_t4_2_layer_dnn/weighted_subnetwork_4/subnetwork/dense_2/kernel:0 (float32_ref 128x128) [16384, bytes: 65536]
adanet/iteration_4/ensemble_t4_2_layer_dnn/weighted_subnetwork_4/subnetwork/dense_2/bias:0 (float32_ref 128) [128, bytes: 512]
Total size of variables: 90496
Total bytes of variables: 361984
```
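If, like the original question, what you really want is the number of units per layer, a printout like the one above already contains that information: the second dimension of each dense kernel is the layer's output width. Here is a small pure-Python sketch (no TensorFlow needed) that parses such lines; the exact line format is assumed to match slim's analyze_vars output shown above.

```python
import re

def layer_units_from_printout(printout):
    """Parse a slim analyze_vars printout and report output units per dense layer.

    Assumes variable lines look like:
        .../dense_1/kernel:0 (float32_ref 128x128) [16384, bytes: 65536]
    The second kernel dimension is taken as the layer's number of units.
    """
    units = {}
    for line in printout.splitlines():
        m = re.search(r"(\S+/kernel):0 \(\w+ (\d+)x(\d+)\)", line)
        if m:
            name, _fan_in, fan_out = m.groups()
            # Key by the layer path (drop the trailing '/kernel').
            units[name.rsplit("/", 1)[0]] = int(fan_out)
    return units
```

Applied to the printout above, this would report 128 units for each of the three dense layers.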
@cicciobyte: FYI, tf.trainable_variables may not return the full picture, because internally AdaNet modifies that collection.
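Since AdaNet prefixes variable names with the iteration they were created in (e.g. adanet/iteration_4/... in the printout above), one way to narrow a variable listing down to the final model is to keep only the highest iteration. This is a sketch under that naming assumption, operating on plain name strings rather than live TensorFlow variables:

```python
import re

def final_iteration_variables(var_names):
    """Keep only variable names from the last AdaNet iteration.

    Assumes AdaNet-style names such as
    'adanet/iteration_4/ensemble_.../dense/kernel'; names without an
    iteration prefix (e.g. 'global_step') are kept unconditionally.
    """
    def iteration(name):
        m = re.search(r"iteration_(\d+)", name)
        return int(m.group(1)) if m else None

    numbered = [iteration(n) for n in var_names if iteration(n) is not None]
    if not numbered:
        return list(var_names)
    last = max(numbered)
    return [n for n in var_names if iteration(n) in (None, last)]
```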
Wow, thanks @cweill, that's a good hint :-) I have limited TensorFlow knowledge, but is this true even if I've exported the AdaNet estimator via SavedModel, closed the Python/TensorFlow session, and loaded the protobuf model? I just want to get the architecture of the final model.
The source of truth for the model architecture is the TensorFlow graph in your exported SavedModel. You can visualize something similar in TensorBoard's "Graph" tab. I'm actively thinking about an easier way of seeing the graph offline without needing TensorBoard, but that support may be a ways away. Perhaps using Keras Models under the hood would help...
@cweill Keras (upstream) has `from keras.utils.vis_utils import plot_model`.
I run into a parsing error when using your approach, with both pbtxt and pb.
I'm running into this problem now too. I haven't found any guide or hint on how to deal with the huge folder structure in the model_dir.
While I use Keras models in the definitions inside the Generator/Builder, I am interested in the final architecture, not the individual subnetworks I could print this way.
Any ideas on how to extract complete architecture info for the final ensemble from model_dir?
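On the "huge folder structure" point: a SavedModel export is identified by a saved_model.pb (or saved_model.pbtxt) file, and export_saved_model typically writes it under a timestamped subdirectory. A simple sketch for locating the actual export directories inside a model_dir (the exact layout of your model_dir is an assumption here):

```python
import os

def find_saved_models(model_dir):
    """Walk model_dir and return every directory containing a SavedModel.

    A SavedModel export is marked by a 'saved_model.pb' or
    'saved_model.pbtxt' file, usually under a timestamped subdirectory
    such as model_dir/export/1589....
    """
    hits = []
    for root, _dirs, files in os.walk(model_dir):
        if "saved_model.pb" in files or "saved_model.pbtxt" in files:
            hits.append(root)
    return sorted(hits)
```

The paths this returns are what you would pass to tf.saved_model.loader.load in the snippets earlier in this thread.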
Keras's plot_model function does not work with the current implementation of AdaNet, since we use the tf.estimator.Estimator API, which does not support it.
Hi, I'd like to retrieve the detailed architecture of the ensemble and its subnetworks during evaluation.
Using
It gives some hints like:

```
Architecture: b"| b'simple_cnn' | b'simple_cnn' |"
```

but that's not very detailed. I'd like to have something like model.summary() in Keras. Is there a way to get the detailed architecture?
Thanks!