Closed: nicolaspi closed this issue 3 weeks ago
Hi @nicolaspi -
Thanks for reporting this issue. You are getting this error because `tf.function` is used with AutoGraph disabled, and in Keras 3 AutoGraph is disabled by default. You can find more details about AutoGraph here.
You need to use eager execution mode, `model.compile(steps_per_execution=33, run_eagerly=True)`, to enable AutoGraph in Keras 3.
Attached gist for reference.
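For reference, a minimal sketch of the suggested compile call (the toy model and dataset below are assumptions for illustration, not taken from the gist):

```python
import tensorflow as tf
from tensorflow import keras

# Toy model (an assumption, just to make the snippet runnable).
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(
    optimizer="sgd",
    loss="mse",
    steps_per_execution=33,  # > 32, which trips the tf.data heuristic in graph mode
    run_eagerly=True,        # run the train step eagerly, as suggested above
)

# 66 samples with batch size 2 gives 33 steps, matching steps_per_execution.
ds = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal((66, 4)), tf.random.normal((66, 1)))
).batch(2)
model.fit(ds, epochs=1, verbose=0)
```

With `run_eagerly=True` the multi-step loop runs as plain Python, so the `get_next()` heuristic never fires, at the cost of eager-mode performance.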
Hi @mehtamansi29
Thanks for the answer.
The issue arises from a protection heuristic defined in tf.data here. The protection is disabled when using eager mode, but that is not a viable solution due to the performance impact.
My solution was to override `make_train_function` and replace:

```python
@tf.autograph.experimental.do_not_convert
def multi_step_on_iterator(iterator):
    for _ in range(self.steps_per_execution):
        outputs = one_step_on_iterator(iterator)
    return outputs
```

with:

```python
# @tf.autograph.experimental.do_not_convert
def multi_step_on_iterator(iterator):
    for _ in tf.range(self.steps_per_execution):
        outputs = one_step_on_iterator(iterator)
    return outputs
```

(Notice the `range` -> `tf.range` change, which prevents AutoGraph from unrolling the `for` loop and makes it convert into a `tf.while_loop` instead.)
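The unrolling difference can be seen with a plain `tf.function` (which has AutoGraph enabled by default), independent of the Keras internals; the function names here are illustrative:

```python
import tensorflow as tf

@tf.function  # autograph=True by default
def unrolled(x):
    # Python range(): the loop is unrolled at trace time,
    # emitting five separate add ops into the graph.
    for _ in range(5):
        x = x + 1
    return x

@tf.function
def staged(x):
    # tf.range(): AutoGraph stages the loop as a single tf.while_loop,
    # so the graph size no longer grows with the step count.
    for _ in tf.range(5):
        x = x + 1
    return x

print(int(unrolled(tf.constant(0))))  # 5
print(int(staged(tf.constant(0))))    # 5
```

Both compute the same result; only the traced graph differs, which is why the `tf.range` version stays below the iterator heuristic's call count.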
Got the same issue.
Manually limiting steps_per_execution with `min(32, wanted_steps_per_execution)` works well as a temporary workaround.
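A sketch of that clamp as a helper (the function name is hypothetical; the limit of 32 comes from the heuristic discussed above):

```python
# Temporary workaround: cap steps_per_execution at the largest value the
# tf.data get_next() heuristic tolerates.
def safe_steps_per_execution(wanted_steps_per_execution, limit=32):
    return min(limit, wanted_steps_per_execution)

print(safe_steps_per_execution(64))  # -> 32
print(safe_steps_per_execution(8))   # -> 8
```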
Training using a `tf.data.Dataset` and `steps_per_execution > 32` fails with:

```
ValueError: An unusually high number of `tf.data.Iterator.get_next()` calls was detected. This suggests that the `for elem in dataset: ...` idiom is used within tf.function with AutoGraph disabled. This idiom is only supported when AutoGraph is enabled.
```
Reproduction code: