pkgoogle opened 21 hours ago
This issue, originally reported by @DLumi, has been moved to this dedicated LiteRT repository to improve issue tracking and prioritization. To ensure continuity, we have created this new issue on your behalf.
We appreciate your understanding and look forward to your continued involvement.
Original Issue: https://github.com/tensorflow/tensorflow/issues/57977 Opening on behalf of @DLumi
1. System information
2. Code
Please note that the issue is only noticeable on Windows machines (tested on three different PCs). In Colab and on a Linux machine I saw little to no decline in performance. Reproduction notebook: https://colab.research.google.com/drive/1d6E3VjbN57ojDd1X0sfG2KA3x7wTMf5N?usp=sharing
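For context, the two converter configurations being compared probably look something like the sketch below (the actual model and conversion code are in the Colab notebook linked above; the `Dense` model here is a hypothetical stand-in):

```python
import tensorflow as tf

# Hypothetical stand-in model; the real model is in the linked Colab notebook.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Default operation set (built-in TFLite ops only) -- this is the
# configuration under which the original model fails to convert:
# converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]

# Extended operation set (Select TF ops) -- conversion succeeds, but CPU
# inference on Windows is roughly 3x slower for the original model:
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]

tflite_model = converter.convert()  # serialized FlatBuffer as bytes
```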
3. Failure after conversion
The model fails to convert with the default operation set (built-in TFLite ops only). Conversion succeeds with the extended operation set (Select TF ops); however, I saw roughly a 3x decline in inference performance on CPU.
4. (optional) RNN conversion support
5. (optional) Any other info / logs
The conversion error traceback can be seen in the Colab notebook above. The issue also reproduces on TF 2.9.1, and it occurs on both Intel and AMD CPUs.
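For anyone trying to reproduce the slowdown, a minimal timing harness along these lines can be run on both Windows and Linux to compare per-invocation latency (again using a hypothetical stand-in model; substitute the converted model from the notebook):

```python
import time

import numpy as np
import tensorflow as tf

# Hypothetical stand-in model; substitute the real converted model here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the serialized model directly from memory and time repeated invokes.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
x = np.random.rand(*inp["shape"]).astype(np.float32)

# Warm-up invoke, then measure the average over 100 runs.
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()

start = time.perf_counter()
for _ in range(100):
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
elapsed_ms = (time.perf_counter() - start) / 100 * 1e3
print(f"avg inference: {elapsed_ms:.3f} ms")
```

Running the same harness against the default-opset and Select-TF-ops builds of the model should make the reported ~3x gap directly measurable.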