Closed: linamede closed this issue 7 years ago
Any tensor created inside a tf.cond is marked as not fetchable, so to work around that you need to be outside the condition. There are two things you can do here:
1. Use tf.Print to print at evaluation time. You can put your print statement in the loop, where the memory matrix is returned from the condition (here).
2. When session.run is called on the return value of a DNC instance's get_outputs(), it returns two things: the output and a partial memory view that contains some of the memory parameters (such as the read/write weightings). You can augment that memory view to also return the memory matrix. Here, the current components of the memory view for each step are collected into the lists defined here, and here these lists are packed into tensors that hold the memory views for the whole sequence. You just need to create a new list, collect the memory matrices in it, and then pack it, and you'll have the memory matrices in the memory view to inspect.
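In NumPy terms, the collect-then-pack pattern described above looks roughly like this (a sketch only; the names and shapes here are made up, not the repo's actual identifiers):

```python
import numpy as np

# hypothetical per-step memory matrices of shape (batch, words, word_size),
# one appended to the list at each step of the sequence loop
step_matrices = [np.full((1, 10, 10), t, dtype=np.float32) for t in range(5)]

# pack the list along a new time axis, the same way the collected lists
# are packed into one tensor per memory-view field after the loop
memory_matrices = np.stack(step_matrices, axis=1)
print(memory_matrices.shape)  # (1, 5, 10, 10): batch, time, words, word_size
```

The TensorFlow equivalent packs the list of per-step tensors the same way, producing one tensor you can add to the returned memory-view dict.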
Feel free to reopen the issue if you have further questions.
Thank you for trying to help.
Using tf.Print(self.memory.memory_matrix, [self.memory.memory_matrix]) does not print anything.
Using self.memory.memory_matrix.eval() shows the error "You must feed a value for placeholder tensor 'input' with dtype float". Trying to feed the input placeholder with zeros,
self.input_data = tf.placeholder(tf.float32, [batch_size, None, input_size], name='input')
self.input_data = tf.zeros([batch_size, None, input_size], dtype=tf.float32, name=None)
shows this error
ValueError: Cannot convert a partially known TensorShape to a Tensor: (1, ?, 6)
Trying the second suggested solution, with outputs, memory_views = ncomputer.get_outputs():
print (outputs)
<tf.Tensor 'Slice_1:0' shape=(?, ?, ?) dtype=float32>
print (memory_views)
{'allocation_gates': <tf.Tensor 'Slice_3:0' shape=(?, ?, ?) dtype=float32>, 'write_weightings': <tf.Tensor 'Slice_6:0' shape=(?, ?, ?) dtype=float32>, 'free_gates': <tf.Tensor 'Slice_2:0' shape=(?, ?, ?) dtype=float32>, 'write_gates': <tf.Tensor 'Slice_4:0' shape=(?, ?, ?) dtype=float32>, 'memory_matrices': <tf.Tensor 'Slice_2:0' shape=(?, ?, ?, ?) dtype=float32>, 'read_weightings': <tf.Tensor 'Slice_5:0' shape=(?, ?, ?, ?) dtype=float32>, 'usage_vectors': <tf.Tensor 'Slice_7:0' shape=(?, ?, ?) dtype=float32>}
Why is there a question mark in the shape of these tensors?
Trying to print(outputs.eval()) leads to the same error as above: "You must feed a value for placeholder tensor 'input' with dtype float".
You're welcome. You seem to have some misconceptions about how TensorFlow APIs work.
tf.Print
A call to tf.Print doesn't actually print anything. Like any other TensorFlow operation, it doesn't perform its work when called; it just creates a graph node that will print the given message when evaluation passes through it. If you just create a tf.Print node and your evaluation graph doesn't go through it, nothing is printed.
To make your print statement work, you have two options:
tf.Print works as an identity operation with the side effect of printing a message; that is, it returns the same tensor you pass as its first argument. You can use this fact and edit this line as follows:
self.memory.memory_matrix = tf.Print(output_list[2], [output_list[2]])
Alternatively, you can force evaluation to pass through the print node by adding it as a dependency:
dependencies = [
    tf.identity(output_list[0]),
    # tf.zeros(1) is just a dummy tensor to return
    tf.Print(tf.zeros(1), [self.memory.memory_matrix])
]
This should get your print to work.
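As a minimal, self-contained illustration of the identity behavior (written against the tf.compat.v1 API so it runs under TF 2.x as well; the tensor values are made up):

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

x = tf.constant([[1.0, 2.0]])
# tf.Print returns its first argument unchanged; the printing happens
# only as a side effect when evaluation flows through this node
y = tf.compat.v1.Print(x, [x], message='tensor value: ')

with tf.compat.v1.Session() as sess:
    result = sess.run(y)  # the message is emitted to stderr during this run
print(result)
```

Note that nothing is printed at graph-construction time; the message appears only when session.run evaluates a path through the Print node.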
self.memory.memory_matrix.eval() error
I don't know if you're feeding the placeholder in the same way you put it in your last comment, that is, via the assignment:
self.input_data=tf.zeros([batch_size, None, input_size], dtype=tf.float32, name=None)
If you're doing that, then this is wrong, and you need to review how TensorFlow works. You should probably read the basic usage guide to see how a model is run.
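For reference, a placeholder is filled via feed_dict at session.run time, not by reassigning the Python attribute; a minimal sketch (hypothetical shapes, tf.compat.v1 API):

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

batch_size, input_size = 1, 6
# the None dimension means "any sequence length"; it is filled in by
# feeding a concrete array at run time, not by reassigning the placeholder
input_data = tf.compat.v1.placeholder(
    tf.float32, [batch_size, None, input_size], name='input')
doubled = input_data * 2.0

with tf.compat.v1.Session() as sess:
    out = sess.run(doubled,
                   feed_dict={input_data: np.zeros((1, 4, 6), np.float32)})
print(out.shape)  # the unknown middle dimension became 4 here
```

Reassigning self.input_data to tf.zeros just replaces the Python reference; the placeholder node stays in the graph and still demands a fed value.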
This is probably because these tensors are the results of tf.slice operations in which the beginning and end of the slice are determined at evaluation time and are not known beforehand.
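A small sketch of that effect (tf.compat.v1 API; the shapes are made up): when a slice size depends on a value only known at run time, the static shape of the result contains an unknown dimension, which prints as ? (or None).

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

n = tf.compat.v1.placeholder(tf.int32, [], name='n')
x = tf.zeros([1, 10, 6])
# the slice size along axis 1 is a tensor fed at run time, so shape
# inference cannot determine that dimension statically
s = tf.slice(x, [0, 0, 0], [1, n, 6])
print(s.shape.as_list())  # middle dimension is None / '?'
```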
With your precise instructions I managed to print the memory contents. I really appreciate your help!
When I try to run the contents of the notebook as a .py file and print the contents of memory, it shows the error below. How can I overcome the problem of printing a tensor that has been marked as 'not fetchable'?
Dynamic Memory Mechanisms Trained on Length-2 Series
('ckpts_dir:', 'checkpoints')
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:925] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_device.cc:951] Found device 0 with properties:
name: GeForce GTX 960 major: 5 minor: 2 memoryClockRate (GHz) 1.253 pciBusID 0000:01:00.0
Total memory: 3.94GiB
Free memory: 3.61GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:972] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 960, pci bus id: 0000:01:00.0)
init dnc.py
in init memory memory.py
init controller.py
/usr/local/lib/python2.7/dist-packages/numpy/core/_methods.py:29: VisibleDeprecationWarning: using a non-integer number instead of an integer will result in an error in the future
  return umr_minimum(a, axis, None, out, keepdims)
get_nn_output_size controller.py
build_graph dnc.py
_step_op dnc.py
process_input controller.py
parse_interface_vector controller.py
write memory.py
get_lookup_weighting memory.py
update_usage_vector memory.py
get_allocation_weighting memory.py
update_write_weighting memory.py
update_memory memory.py
('updated_memory', <tf.Tensor 'sequence_loop/cond/add_16:0' shape=(1, 10, 10) dtype=float32>)
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 960, pci bus id: 0000:01:00.0)
Traceback (most recent call last):
File "/home/user/Documents/deep_learning_frameworks/DNC-tensorflow/tasks/copy/test_dnc.py", line 127, in
batch_size=1
File "/home/user/Documents/deep_learning_frameworks/DNC-tensorflow/tasks/copy/dnc/dnc.py", line 68, in __init__
self.build_graph()
File "/home/user/Documents/deep_learning_frameworks/DNC-tensorflow/tasks/copy/dnc/dnc.py", line 195, in build_graph
lambda: self._dummy_op(controller_state)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1710, in cond
orig_res, res_t = context_t.BuildCondBranch(fn1)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/control_flow_ops.py", line 1613, in BuildCondBranch
r = fn()
File "/home/user/Documents/deep_learning_frameworks/DNC-tensorflow/tasks/copy/dnc/dnc.py", line 193, in
lambda: self._step_op(step, controller_state),
File "/home/user/Documents/deep_learning_frameworks/DNC-tensorflow/tasks/copy/dnc/dnc.py", line 102, in _step_op
interface['erase_vector']
File "/home/user/Documents/deep_learning_frameworks/DNC-tensorflow/tasks/copy/dnc/memory.py", line 340, in write
memory_matrix = self.update_memory(write_weighting, write_vector, erase_vector)
File "/home/user/Documents/deep_learning_frameworks/DNC-tensorflow/tasks/copy/dnc/memory.py", line 184, in update_memory
session.run(model)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 717, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 902, in _run
fetch_handler = _FetchHandler(self._graph, fetches, feed_dict_string)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 367, in __init__
self._assert_fetchable(graph, fetch)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 382, in _assert_fetchable
'Operation %r has been marked as not fetchable.' % op.name)
ValueError: Operation u'sequence_loop/cond/init' has been marked as not fetchable.
[Finished in 1.4s with exit code 1]