Closed faysou closed 5 years ago
What level of support do you need? You can always `import tensorflow.compat.v1 as tf` and the old code will work fine, just without the new features.
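For context, here is a minimal sketch of that compatibility import (assuming TensorFlow 2.x is installed); the placeholder/Session usage below is just illustrative TF 1-style code:

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF 1.x API surface under TensorFlow 2

tf.disable_v2_behavior()  # switch off eager execution and other TF 2 defaults

# Old-style graph code keeps working unchanged:
a = tf.placeholder(tf.float32, shape=(2, 2))
b = a * 2.0
with tf.Session() as sess:
    result = sess.run(b, feed_dict={a: np.ones((2, 2))})
```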
On Thu, 7 Nov 2019 at 07:42, faysou notifications@github.com wrote:
Is there any plan to migrate the library to support Tensorflow 2 ? What would need to be changed in the library for this ?
Ok thank you, then this should be enough for me. Great library by the way! I've used it for tensor completion. Tensor trains are quite magical; I knew about HOSVD, but this is a big step forward.
Thanks, glad someone uses it! :)
Note that you'll probably have to change the tf imports in the whole library (I know, annoying). I guess the simplest thing to do would be to fork the library and use "replace in all files" in some editor.
I just realized that I can probably always import compat.v1 even in older versions, so I'll try making it the default: #191
Yes that would be nice, so your library works out of the box with the newer version of tf.
I think #191 should work, just don't forget to call tf.disable_v2_behavior() (like in the tutorial: https://colab.research.google.com/github/Bihaqo/t3f/blob/tf2_dummy_support/docs/quick_start.ipynb)
Will probably merge it tomorrow
I'm trying to use Session instead of eager mode in order to use gradients in a completion algorithm where the rank of a tensor is increased gradually.
It seems hard in TensorFlow 1 to avoid recomputations as well as to manage variables.
Does TensorFlow 2 avoid this issue, since its recommended mode is eager evaluation? Maybe this library could become easier to use in TensorFlow 2, as using tf.Session really complicates the code.
Can you show a code example please? It feels like you shouldn't need to grow the ranks, but maybe I'm missing something :)
You can run t3f in eager mode, just do tf.enable_eager_execution() at the top of your file. But not everything is supported, e.g. t3f.gradients doesn't work in eager mode right now. It's not too hard to support it, but I'm not sure when I'll have time to do that.
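As a minimal sketch of what enabling eager mode looks like (assuming TF 1.x, or compat.v1 under TF 2); note the call has to happen before any other TF operation:

```python
import tensorflow.compat.v1 as tf

tf.enable_eager_execution()  # must run before building any graph ops

x = tf.constant([[1.0, 2.0]])
y = x * 3.0  # evaluated immediately, no tf.Session needed
print(y.numpy())  # [[3. 6.]]
```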
I created a separate issue about it so I don't forget: #193
Hi Alexander, sorry I didn't see your message. You can have a look at the paper below, p. 14.
http://sma.epfl.ch/~anchpcommon/publications/ttcompletion.pdf
Fair enough, I agree it would be easier to do in TF 2.
Note though that even in TF 2 it would be super annoying, because you have to use tf.function to compile pieces of code to make them run faster, and tf.function doesn't support anything but tf tensors (i.e. it doesn't support t3f objects).
So you would write something like
```python
def step(tens):
    # ... process
    # Do something that increases ranks
    tens = tens + 1
    return tens

for i in range(num_iter):
    tens = step(tens)
```
But then, to make it run reasonably fast you'll have to use tf.function on step, and to do that you'll have to make it take a list of tf.Tensors as input and output a list of tf.Tensors, i.e. something like
```python
@tf.function
def step(tens_cores):
    # ... process
    # Do something that increases ranks
    tens = t3f.TensorTrain(tens_cores)
    tens = tens + 1
    return tens.tt_cores

for i in range(num_iter):
    tens_cores = step(tens_cores)
```
And at this point you can do something like this with TF 1 as well, i.e.
```python
def step(tens_cores):
    # ... process
    # Do something that increases ranks
    tens = t3f.TensorTrain(tens_cores)
    tens = tens + 1
    return tens.tt_cores

next_iter_cores = step(tens.tt_cores)

with tf.Session() as sess:
    for i in range(num_iter):
        tens_cores = sess.run(
            next_iter_cores,
            feed_dict={tens.tt_cores[j]: tens_cores[j]
                       for j in range(len(tens_cores))})
```
Anyway, it's a bit ugly in both TF 1 and TF 2, but TF 2 indeed would be nicer :)
I'll take a look at implementing t3f.gradients in eager mode.
Great, thank you for your reply.
After thinking about it some more, I'm less sure you can easily do that in TF 1.
I'll wait for your v2 then :)
Your library could save a lot of energy in training; it should be used by more people for neural networks.
Ok, done :)
This is not merged yet though, so please check out this branch: #193
Wow, thank you. So to be clear, your new commit is about auto-diff in eager mode, not full support for TensorFlow 2?
I'll try your new commit; it should speed up my code.
Better support is added in #201