Traceback (most recent call last):
File "train3.py", line 74, in
model = WordCNN(vocabulary_size, WORD_MAX_LEN, NUM_CLASS)
File "/home/migueltuxd/bucket4testingtpuss/TPU/text-classification-models-tf-master/cnn_models/word_cnn.py", line 54, in init
self.optimizer = tf.train.AdamOptimizer(self.learning_rate).minimize(self.loss, global_step=self.global_step)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/optimizer.py", line 413, in minimize
name=name)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/training/optimizer.py", line 564, in apply_gradients
raise RuntimeError("Use _distributed_apply() instead of "
RuntimeError: Use _distributed_apply() instead of apply_gradients() in a cross-replica context.
TensorFlow 2.x, Python 3.7.3
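From the error message, it looks like the optimizer's apply_gradients() (called inside WordCNN's __init__ via minimize()) is executing in a cross-replica context, i.e. directly under the distribution strategy's scope rather than inside a per-replica training step. Below is a minimal, self-contained sketch of the pattern tf.distribute expects: apply_gradients() runs inside a step function launched through strategy.run(). The Keras model, optimizer, strategy type, and hyperparameters here are stand-ins for illustration only, not the repo's WordCNN code; MirroredStrategy is used so the sketch runs without a TPU, whereas on a TPU you would build a TPUStrategy from a TPUClusterResolver instead.

```python
import tensorflow as tf

# Stand-in strategy; on a TPU VM you would use a TPUStrategy built from a
# TPUClusterResolver. The key point is the same for any strategy.
strategy = tf.distribute.MirroredStrategy()
GLOBAL_BATCH_SIZE = 32

with strategy.scope():
    # Stand-in for WordCNN: any Keras model works to illustrate the pattern.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(10000, 64),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(15),
    ])
    optimizer = tf.keras.optimizers.Adam(1e-3)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE)

@tf.function
def train_step(dist_inputs):
    def step_fn(inputs):
        x, y = inputs
        with tf.GradientTape() as tape:
            logits = model(x, training=True)
            per_example_loss = loss_fn(y, logits)
            loss = tf.nn.compute_average_loss(
                per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE)
        grads = tape.gradient(loss, model.trainable_variables)
        # apply_gradients() now runs in a *replica* context, so the
        # "Use _distributed_apply() instead of apply_gradients()" error
        # does not fire.
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss
    # On TF < 2.2 this method is named strategy.experimental_run_v2().
    per_replica_losses = strategy.run(step_fn, args=(dist_inputs,))
    return strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None)

# Example usage with a toy dataset distributed across replicas.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform((GLOBAL_BATCH_SIZE * 4, 100), maxval=10000, dtype=tf.int32),
     tf.random.uniform((GLOBAL_BATCH_SIZE * 4,), maxval=15, dtype=tf.int32))
).batch(GLOBAL_BATCH_SIZE)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
for batch in dist_dataset:
    print(train_step(batch).numpy())
```

By contrast, the repo's word_cnn.py builds the whole graph, including AdamOptimizer(...).minimize(...), inside the model's __init__, so when that constructor runs under the strategy's scope the apply_gradients() call is still in the cross-replica context, which appears to be what triggers the RuntimeError above.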
Also shared on Stack Overflow: https://stackoverflow.com/questions/61704387/wordcnn-trouble-with-distributed-apply-and-apply-gradients