Closed: alanferrari95 closed this issue 1 year ago
In "train.py", when I want to execute these lines of code:
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=configs.learning_rate),
    loss=CTCloss(),
    metrics=[CWERMetric()],
    run_eagerly=False,
)
I get this error: "TypeError: __init__() missing 1 required positional argument: 'padding_token'"
I can see that "CWERMetric()" comes from "mltu.tensorflow.metrics".
In "metrics.py", I can see these "__init__" arguments: "(self, padding_token, name='CWER', **kwargs)"
Also I read:
self.padding_token = padding_token
And after that, in the "update_state" method, I can confirm that "self.padding_token" is used in this line of code:
true_labels_sparse = tf.sparse.retain(true_labels_sparse, tf.not_equal(true_labels_sparse.values, self.padding_token))
How can I solve this problem? Thank you!
Initialize it as CWERMetric(padding_token=...); usually, this should be the length of your character dictionary.
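A minimal pure-Python sketch of why the padding token is the length of the character dictionary (the vocab and encode helper here are hypothetical, standing in for the label padding done in the mltu tutorials): short labels are padded with an index one past the last real character, so no actual character ever collides with it.

```python
# Hypothetical character dictionary; in the tutorials this would be configs.vocab.
vocab = "abc"

# The padding token is an index no real character maps to: len(vocab).
padding_token = len(vocab)  # -> 3

def encode(text, max_len):
    # Map characters to indices, then pad to a fixed length with the padding token.
    return [vocab.index(c) for c in text] + [padding_token] * (max_len - len(text))

print(encode("ab", 4))  # [0, 1, 3, 3] -> trailing 3s are padding
```

So in "train.py", the metric would be constructed along the lines of `CWERMetric(padding_token=len(configs.vocab))`, which lets `update_state` strip exactly those padding entries via `tf.sparse.retain`.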