Bihaqo / t3f

Tensor Train decomposition on TensorFlow
https://t3f.readthedocs.io/en/latest/index.html
MIT License
219 stars 55 forks

Difficulty in instantiating TensorTrain from tt_cores #113

Closed AustenLamacraft closed 6 years ago

AustenLamacraft commented 6 years ago

I have the following problem that I don't know how to get around... would really appreciate any advice.

I want to form model inputs into TT cores and use them to instantiate a TensorTrain (I can't do to_tt_tensor because the number of inputs is too large -- the full tensor would be huge)

However, since TensorTrain does tt_cores = list(tt_cores), I get the following TensorFlow error:

TypeError: `Tensor` objects are not iterable when eager execution is not enabled. To iterate over this tensor use `tf.map_fn`.

(tf.map_fn doesn't solve the problem, of course). Eager execution doesn't work at present:

import tensorflow as tf
import tensorflow.contrib.eager as tfe
import t3f

tfe.enable_eager_execution()

shape = (3, 4, 4, 5, 7, 5)
initialization = t3f.random_tensor(shape, tt_rank=5)
estimated = t3f.get_variable('estimated', initializer=initialization)

yielding

ValueError: ValueError: Variable estimated does not exist, or was not created with t3f.get_tt_variable(). Did you mean to set reuse=None in VarScope?
Bihaqo commented 6 years ago

So you want the output of your model to be a tensor train object, right? Can you do it via TensorTrain(output_of_your_model), where output_of_your_model is a list of tensors of the appropriate sizes? If your model returns just a flat bunch of numbers, you can slice them into chunks and reshape those into TT-cores, e.g.:

original_output = tf.random_normal((100,))
tt_cores = []
tt_cores.append(tf.reshape(original_output[0:6], (1, 3, 2)))
tt_cores.append(tf.reshape(original_output[6:30], (2, 3, 4)))
...
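A minimal end-to-end sketch of this slicing idea, using NumPy to illustrate the arithmetic (the core shapes below are hypothetical examples; in the thread the reshapes would be tf.reshape calls on the model's output tensor):

```python
import numpy as np

# Hypothetical TT-core shapes (r_{k-1}, n_k, r_k) for a tensor of shape
# (3, 3, 2); boundary ranks are 1 and the core sizes sum to 6 + 24 + 8 = 38.
shapes = [(1, 3, 2), (2, 3, 4), (4, 2, 1)]
output = np.arange(38, dtype=np.float64)  # stand-in for the model's flat output

# Slice the flat output into consecutive chunks and reshape each into a core.
cores, offset = [], 0
for s in shapes:
    size = int(np.prod(s))
    cores.append(output[offset:offset + size].reshape(s))
    offset += size

# Contracting the cores along the rank indices recovers the full tensor.
full = np.einsum('aib,bjc,ckd->ijk', *cores)
```

With TensorFlow tensors, the same chunks would instead be passed as a list to t3f.TensorTrain, avoiding any materialization of the full tensor.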


AustenLamacraft commented 6 years ago

Thanks, that works. I was using a tfe.Iterator to get data, which produces Tensors, but I guess there's no need.

On the other hand, it would be great if t3f would work in eager mode. Do you have any idea how hard it is? I'd be happy to give it a try...

Bihaqo commented 6 years ago

Hm, I'll take a look, thanks for the request.

AustenLamacraft commented 6 years ago

It seems that a hacky workaround is to change variables.py at line 58 from `if reuse` to `if reuse is True`. I suspect this will mess up variable reuse, though.

KhrulkovV commented 6 years ago

It seems that the problem is that when eager mode is enabled, reuse is always truthy, so the code tries to find the TensorTrainVariables collection and crashes. I tried to fix it the way the TensorFlow code does, e.g. `if reuse and not context.in_eager_mode()`, and it seems to work; I will upload it soon.
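The condition described above can be sketched as a plain predicate (the function name and signature here are hypothetical simplifications of the check in variables.py, not t3f's actual API):

```python
def should_reuse(reuse, in_eager_mode):
    # Only take the variable-reuse lookup path outside eager mode.
    # In eager mode `reuse` defaults to a truthy AUTO_REUSE sentinel,
    # so a bare `if reuse:` would wrongly trigger the lookup and crash.
    return bool(reuse) and not in_eager_mode
```

In graph mode with reuse requested this returns True (look up the existing variable); in eager mode it always returns False, sidestepping the crash.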

AustenLamacraft commented 6 years ago

In eager mode, tf.get_variable_scope().reuse is `<_ReuseMode.AUTO_REUSE: 1>`, which is truthy but not the singleton True. That is why `if reuse` was causing the original error, and why `if reuse is True` fixes it.
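This distinction can be reproduced without TensorFlow: an enum member is truthy, but it is not the singleton True, so `if reuse` fires while `if reuse is True` does not. The enum below is a stand-in for TensorFlow's internal reuse-mode class, not the real one:

```python
from enum import Enum

class _ReuseMode(Enum):
    # Stand-in for TensorFlow's internal reuse-mode enum.
    AUTO_REUSE = 1

reuse = _ReuseMode.AUTO_REUSE  # what eager mode reports as the scope's reuse

truthy = bool(reuse)       # True: a bare `if reuse:` takes the reuse branch
identical = reuse is True  # False: `if reuse is True` does not
```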

Bihaqo commented 6 years ago

#114

cw-plus commented 6 years ago

Hi, I can only use CUDA 8.0, so I have to install a TensorFlow version below 1.5.0. How can I solve this problem? Please help me.