Open AdwaithAnand opened 3 weeks ago
Hi @AdwaithAnand, I don't see documentation specific to this repo. We do have documentation for an older workflow: https://ai.google.dev/edge/litert/models/ondevice_training. I will log this as a feature request for now.
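For context, the older workflow linked above attaches training as an explicit `tf.function` signature on a `tf.Module` and exports it through the TFLite converter. The sketch below follows that documented pattern; the model architecture, signature names (`train`/`infer`), and save path are illustrative, not from this repo:

```python
import tensorflow as tf

IMG_SIZE = 28

class TrainableModel(tf.Module):
    """Wraps a Keras model and exposes explicit train/infer signatures."""

    def __init__(self):
        self.model = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(IMG_SIZE, IMG_SIZE)),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(10),
        ])
        self.loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
            from_logits=True)
        self.optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

    @tf.function(input_signature=[
        tf.TensorSpec([None, IMG_SIZE, IMG_SIZE], tf.float32),
        tf.TensorSpec([None], tf.int64),
    ])
    def train(self, x, y):
        # One on-device training step: forward, loss, gradient update.
        with tf.GradientTape() as tape:
            logits = self.model(x)
            loss = self.loss_fn(y, logits)
        grads = tape.gradient(loss, self.model.trainable_variables)
        self.optimizer.apply_gradients(
            zip(grads, self.model.trainable_variables))
        return {"loss": loss}

    @tf.function(input_signature=[
        tf.TensorSpec([None, IMG_SIZE, IMG_SIZE], tf.float32),
    ])
    def infer(self, x):
        return {"logits": self.model(x)}

m = TrainableModel()
tf.saved_model.save(
    m, "/tmp/trainable_model",
    signatures={"train": m.train, "infer": m.infer},
)

converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/trainable_model")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,  # training ops fall back to TF select ops
]
converter.experimental_enable_resource_variables = True  # mutable weights
tflite_model = converter.convert()
```

On device, the runtime can then look up and invoke the `train` signature repeatedly to fine-tune the weights, which is what a plain inference-only conversion does not provide.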
Description of the bug:
I am currently working on a project that involves converting a trained PyTorch model to LiteRT and deploying it to edge devices. I need the LiteRT model to retain training capabilities (for all trainable layers), with the appropriate signatures added.
I would like guidance on converting a PyTorch model to a TensorFlow or LiteRT model with pytorch_converter while keeping it trainable for edge-based fine-tuning.
Does the converter support creating a trainable TensorFlow or LiteRT graph during conversion from a PyTorch model?
Actual vs expected behavior:
No response
Any other information you'd like to share?
I already tried setting the PyTorch model to `.train()` mode before converting it. Although a few additional layers were added compared to the model converted in `.eval()` mode, the generated model is still not actually trainable (it is a frozen graph).