Open adjhawar opened 2 years ago
While you could run in-place training as shown in this example, the currently exported TensorFlow SavedModel is for inference only and does not include training variables.
Is it possible to modify the converter code and get the required support? If yes, then what changes will be required on the onnx-tensorflow converter side to support training on the converted model?
@chinhuang007 could you please clarify what "in-place training" means? Could you also clarify why the example you've shared uses TF1 and not TF2?
I am able to convert an ONNX model to TensorFlow. However, when I try to access trainable_variables/variables on the loaded model, I get an error that the attribute is not present. Likewise, when I try to access the variables/trainable variables through model.signatures['serving_default'], that list is also empty.
How can I get the trainable variables/variables from the converted TensorFlow model?
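For comparison, here is a minimal sketch (not onnx-tf output, just a hand-built `tf.Module` used as a stand-in) showing where variables show up when a SavedModel actually exports them. If the converted model was exported with the weights folded into the graph as constants, both of these lookups come back empty, which would match the behavior described above.

```python
import tempfile
import tensorflow as tf

# A tiny module with one trainable variable, saved and reloaded the
# same way you would load the converted SavedModel.
class Dense(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(tf.ones([2, 2]), name="w")

    @tf.function(input_signature=[tf.TensorSpec([None, 2], tf.float32)])
    def __call__(self, x):
        return x @ self.w

module = Dense()
path = tempfile.mkdtemp()
tf.saved_model.save(module, path, signatures=module.__call__)

loaded = tf.saved_model.load(path)
fn = loaded.signatures["serving_default"]

# When variables were exported, they are reachable both on the loaded
# object and on the signature's concrete function:
print(loaded.w)
print([v.name for v in fn.variables])
# An inference-only export (weights baked in as graph constants) would
# instead report no variables here.
```

Running `saved_model_cli show --dir <path> --all` on the exported directory is another quick way to check what the SavedModel actually contains.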