benlin1996 closed this issue 3 years ago
I'm not sure I follow.
If you are trying to do something like this
a = tf.zeros([])

@tf.function
def foo():
    a += 1
foo()
it won't update a. a has to be a variable (a tf.Variable) and you need to do a.assign_add(1) or a.assign(a + 1).
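For example, a minimal standalone sketch of the variable-based version (just an illustration, not seed_rl code):

import tensorflow as tf

a = tf.Variable(0.0)  # a tf.Variable, not a plain tensor

@tf.function
def foo():
    a.assign_add(1.0)  # or: a.assign(a + 1)

for _ in range(3):
    foo()

print(a.numpy())  # 3.0: the variable is updated on every call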
Thanks, we can successfully update a tensor inside tf.function, but for some reasons we have to update a non-tensor value inside tf.function, such as:
@tf.function
def random1():
    i = np.random.random()  # we tried initializing i both inside and outside tf.function, but the same problem exists
    tf.print("number_random", i)
    print("number_random", i)
    j = tf.random.uniform(shape=[1], minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)
    tf.print("tensor_random", j)
    print("tensor_random", j)
    return i
for _ in range(3):
    x = random1()
The execution result is:

number_random 0.9144701677650857                                      (output of print(i))
tensor_random Tensor("random_uniform:0", shape=(1,), dtype=float32)
number_random 0.9144701677650857                                      (output of tf.print(i) only; print(i) does not execute after the first loop)
tensor_random [0.439750195]
number_random 0.9144701677650857
tensor_random [0.499604702]
The non-tensor i does not update after the first loop. The tensor j does update, but in the first loop only a symbolic Tensor with no concrete value is printed. We can't find the cause of this behavior.
Hope I described this clearly. Thanks.
You can't use NumPy directly inside a tf.function: the function body is traced and compiled once, so the np.random.random() call runs only at trace time and its result is baked in as a constant. If you have to use NumPy inside a tf.function you need tf.numpy_function. E.g.
import numpy as np
import tensorflow as tf
@tf.function
def random1():
    i = tf.numpy_function(np.random.random, [], [tf.float64])  # we tried initializing i both inside and outside tf.function, but the same problem exists
    tf.print("number_random", i)
    # print("number_random", i)
    j = tf.random.uniform(shape=[1], minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)
    tf.print("tensor_random", j)
    # print("tensor_random", j)
    return i

for _ in range(3):
    x = random1()
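One extra note, in case it matters for your use case: with Tout given as a list, tf.numpy_function returns a list of tensors whose static shape is unknown, so if you need a plain scalar tensor you could do something like:

i = tf.numpy_function(np.random.random, [], [tf.float64])[0]  # unpack the single output
i.set_shape([])  # np.random.random() returns a scalar, so give the tensor a scalar shape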
Thanks for your help
Thanks for this wonderful code, which is very useful for our research. When we tried to add a few lines of code to seed-rl for our purposes, we ran into the following problem: we initialize a non-tensor variable outside a tf.function (e.g. a = 0) and try to update it inside the inference function, which is a tf.function (e.g. a += 1). After we execute this function on the actor side (using client.inference()) and then read the variable again on the learner side (e.g. print(a)), the value of this non-tensor variable has not been updated (it remains a = 0).
We have tried hard to fix this, but the problem persists. We would be very grateful if you could give us some guidance.
Thanks
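A simplified standalone sketch of the behavior we mean (just an illustration, not our actual seed_rl change):

import tensorflow as tf

a = 0  # plain Python int defined outside the tf.function

@tf.function
def inference():
    global a
    a += 1  # this Python-level increment only runs while the function is being traced
    return tf.constant(a)

for _ in range(3):
    inference()

print(a)  # prints 1, not 3: the increment happened once, at trace time, and never again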