Unfortunately, these graphs have many, many nodes in them. The namespace can help: the result of the activation will be named prefix/layer_name/Relu, so you can find it that way as a one-off. I've outlined a few more options below.
The easiest way is to assign the values back to a variable and do something with it; of course, this ends up introducing a lot of line noise:
x_pretty = x_pretty.conv2d(...)
do_something_with_image(x_pretty)
x_pretty = x_pretty.max_pool()
I would find that solution to be less than ideal, so I will lay out a couple of other ones:
with pt.defaults_scope(activation_fn=tf.nn.relu):
    seq = x_pretty.sequential()  # Now each call changes seq.
    seq.conv2d(kernel=5, depth=64, name='layer_conv1')
    do_something(seq.as_layer())  # as_layer takes a snapshot.
    seq.max_pool(kernel=2, stride=2)
    seq.conv2d(kernel=5, depth=64, name='layer_conv2')
    do_something(seq.as_layer())  # as_layer takes a snapshot.
    seq.max_pool(kernel=2, stride=2).flatten()
    seq.fully_connected(size=256, name='layer_fc1')
    do_something_else(seq.as_layer())
    seq.fully_connected(size=128, name='layer_fc2')
    do_something_else(seq.as_layer())
    y_pred, loss = seq.as_layer().softmax_classifier(class_count=10, labels=y_true)
You could also use the callback _method_complete on any Pretty Tensor object. It is called at the end of each method call in order to support side-effectful execution (sequential) or the standard execution; the type coming in can be anything Tensor-like, so you'd have to call super for it to be wrapped as a PT.
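As a rough sketch of that pattern (the class path and the single-argument signature here are assumptions based on the description above, not a documented API):

from prettytensor import pretty_tensor_class

# Sketch only: override _method_complete to observe every completed call.
class InspectingLayer(pretty_tensor_class.Layer):
    def _method_complete(self, result):
        # 'result' can be anything Tensor-like; inspect it here, then
        # call super so it is wrapped back into a Pretty Tensor object.
        print('completed:', getattr(result, 'name', result))
        return super(InspectingLayer, self)._method_complete(result)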
If you feel like this would be a useful general abstraction to allow a user-defined callback, then I welcome the contribution. It would probably be a good way to standardize summaries as well :)
Thanks for the quick answer!
The reason I like Pretty Tensor is the elegant syntax when using the chained-mode of constructing the network, so I don't want to ruin that.
I think it would be OK for my project if I just get the output of the layers using their names. However, when I try the following (note that I have actually enclosed the above code in the namespace 'network' in my own code):
bar = tf.get_default_graph().get_tensor_by_name('network/layer_conv1/Relu')
print(bar)
I get this error:
ValueError: The name 'network/layer_conv1/Relu' refers to an Operation, not a Tensor. Tensor names must be of the form "<op_name>:<output_index>".
If instead I have:
bar = tf.get_default_graph().get_operation_by_name('network/layer_conv1/Relu')
print(bar)
I get the following output:
name: "network/layer_conv1/Relu"
op: "Relu"
input: "network/layer_conv1/add"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
I then try to execute this in the TensorFlow session to get the output of the convolutional layer:
baz = session.run(bar, feed_dict={x: images_test[0:10, :, :, :]})
print(baz)
But I just get None as the result.
How should I do this? What is the reason?
Thanks again.
After some more time searching the internet, I found out how to do this. We need to append :0 to the name of the op to get its associated tensor. This is the "<op_name>:<output_index>" form from the error message above: an op can have several output tensors, and :0 selects the first one. Here's how to do it:
bar = tf.get_default_graph().get_tensor_by_name('network/layer_conv1/Relu:0')
print(bar)
Which gives the following output:
Tensor("network/layer_conv1/Relu:0", shape=(?, 24, 24, 64), dtype=float32)
And we can now run the session to get the output of the convolutional layer as follows:
baz = session.run(bar, feed_dict={x: images_test[0:10, :, :, :]})
print(baz.shape)
Which outputs:
(10, 24, 24, 64)
And the dimensions represent:
[image_number, height, width, output_channel]
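From here I can do my own plotting; a minimal matplotlib sketch (the channel index and colour map are arbitrary choices):

import matplotlib.pyplot as plt

# Plot channel 0 of the conv-layer output for the first test image.
plt.imshow(baz[0, :, :, 0], interpolation='nearest', cmap='binary')
plt.show()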
This took a long time to figure out, and the solution is not obvious at all. One must know low-level details of both Pretty Tensor and TensorFlow to figure out how to do this. Please consider these things both when designing the APIs and when documenting them.
For people who come across this via search, the OP's tutorials are really helpful:
I hope it's alright that I ask this question here. I don't think it would get answered on StackOverflow, and this forum is not very busy.
I define the following Convolutional Neural Network for CIFAR-10 using Pretty Tensor:
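In outline it looks like this (a sketch; x is the image placeholder and y_true the one-hot labels):

x_pretty = pt.wrap(x)  # Wrap the input images as a Pretty Tensor.

with pt.defaults_scope(activation_fn=tf.nn.relu):
    y_pred, loss = x_pretty.\
        conv2d(kernel=5, depth=64, name='layer_conv1').\
        max_pool(kernel=2, stride=2).\
        conv2d(kernel=5, depth=64, name='layer_conv2').\
        max_pool(kernel=2, stride=2).\
        flatten().\
        fully_connected(size=256, name='layer_fc1').\
        fully_connected(size=128, name='layer_fc2').\
        softmax_classifier(class_count=10, labels=y_true)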
I want to extract the images that are output from the convolutional layers and plot them, not in TensorBoard, but using my own plotting.
Someone on StackOverflow gave the following code for printing the names of all the nodes in the graph:
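Something along these lines:

# Print the name of every node in the default graph.
for n in tf.get_default_graph().as_graph_def().node:
    print(n.name)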
But the list is really long, and it's not clear to me which node name represents the output of e.g. layer_conv1.
Is there an easy way of getting the images that are output from these layers? How about the tensors that are output from the fully-connected layers?
Thanks!